[Binary artifact: POSIX tar archive, not text. Recoverable member listing:]

var/home/core/zuul-output/
var/home/core/zuul-output/logs/
var/home/core/zuul-output/logs/kubelet.log.gz

[The remainder of the file is the gzip-compressed body of kubelet.log.gz; the compressed byte stream is not representable as text and has been omitted. Extract with `tar -x` and `gunzip` to read the kubelet log.]
:iQP`t+p6SVJap}Cm1V8hF5FKLDyS$^ɨQZ`( iMU;Z/#ӡ/g{,.2kP"I#sKA*\e9ꗮYj^1"bӅF,cj(%[: +$L:NCt:B*e^uc.bICڋ@M\ $=uIM {B+ Vn`T a;׭-2fq4KǮYV{T:M)CS$ʇ#HQI(EHGPbTܛ)# r}IYkv__O[$w)Gug%8..}t S;'VB;N6S2HJc39ŗꪳqL庍|v8  |~,lC4Sy"?I_̤CǿtsRfսpeXc4$ZcU [8YURmmq㛗ΞS)ev:dՁ,6T;_Œ屡l&/k{:c ).;wP:qdocuWEv$3ϧt}ye ^##Ry,vgVTM1%`ƓWt6fr ^#w܃ѧPTYa$fL H0 5$RypFk  DfD,gZ|֙~Lu۸čuwLN'__κLܰ[o>t6zT'^)4?R" x)Q񱃗JK^Q^+mu-l DERQU a V{ uS,-rVduZO~S^nQՅl然N>XaIkbOo ]ŒsP!ٻAdH˿_Շ}ɣ!quG, NNn ߼Mlj3gs~~Dkco%Lֻ/߻OG(+2GT$fWJ}ʏ}b52> ×^ynM̞W_4߾Mb KM\Tk _75IEz6rӽ:gҜֽy e~kOWĀ917?UTv!x L2nW}e=oB5#q^?G#fY>0 V0b2Fނ-TٿO7LNQ ׇO߿>~0Q?}\!Euԑ O¯"@ghмajhXu. ;q Sq1s6s7>W! vmugUijfWtؖзc dG+llcNҶ*k?~A f fB ,wLxQ3M}\k4'P1vN;mJ,hwf*8r ڐ{E6,`$sҫ$}0Bh0JtV!&azw{"Zָ| h*'+)4_X} K k)Hƴ'fTc|z!mOi2mc&5c5)ʨe E1T#qhD+ ,iL#hNH[~EVnBxalizKD&F>IjVtfzOs !p`@FTZFC!W!)nK1 Z6wVW06hXFYDi5{t7Pu| R.xzzW,,,<7h%>UA" ʱWA"L1ɐ2% S.; NdKMvܿ/xmVĺǰ:UfKy|Ō5ÌrSzIO'KsBPNQQZi)&jE*NIx4ƺg'O8 YyjŬ;Xp:wAL} =Hv yo[.f?k1ڃK֣&": ,^]0MFy(" J/A=ӣ6ћ\ooUVvxg>ۿai$2Rd*`)8B+B3ؔ1F?QdzUJo\8h!6f:z{E3,"J0.6#'{*,J]t֥ O$HQJM:(4{*u2HKCkFѤBƌQ%dTF)_C([ хk∏&6fH{~UDOo݊ Um?6:Gzeutt?\K]˜%nŶ` Kk Ie)k`FlmPX_>Su2ˋL6V6IwgK`tytJd.L_*ӌO|ohMҏJ99? -rF| jw]zYϠok7Nv}M, amSWa\ p\(0 NnSn$D hCd9e T* "5}<;YE}DzUҡ-'-SrB6Ljx_3}n˧trY/I gIh\]X:xapv2S K7ݽvVԶN`ޜȖv6t\.9icr+!N= ]b{ߛY>w4 3.3 xrܺe՞gZ1§I*1`UP5B5@:69LXlR80_Ѫ.vvM:цuY_Oʕؿؼ4)PN̊t'ޖ'-F@.kk Kr]:{tN_1sSJ! SX`wɏ$YOsYEH,g#1Xfף[@-GH>鷈>}JK=gƟS5Z "_ 2s{7"tHw/͆[ܸT9bzxab! m#AG=*hl<52:WϽ*PB )mXAdL:;#6RTSH*xS<09Ӥ׸ #f?B5~*{o,EV8gr0yYn7%Aw ~nKATU{zcq% vfz^]I+?~AS>w̃5BWqs8iA" -)`qQ2JZS=B chH`k p銰tzV{Vyce%`1s1Fd {nC( IT4(JBF#,Z[UVlՖX#ud~X*ePiHwe͍ wkXwe)BV~nWCvIBXhcA `VCt̯CbKZJj\CSe֜`ՀU TS!^\NF0ZI ;΂ThɨO5B,IR!Ԅ/!,K;CB@=YG*Qr MZؐ$x"P#8);fׂYL bo)|U 352D Ws6: YB# K&NEL谷 r&;yUEKm&7[Whj5~Os%{n*xމ/eAوx>ߎw=m$$'  -K4ɼDSi&oSI;GyZ\ 6 XOz 9h;ΌXJ;Ƴ0īVi`+&{6W2~.+Jkg(Ԭ^W0UG3J2 -3Gd>JDl9Z2vkru%5G" ;4<)wLB{W֏7tWZH:!TʡPY &-F$U?w`.f{Mze⻾ThfSön3X;q;TJ[50|՟&. 3gI 8)Gٚk!qZz‰Ӟo8żl:tg <"4XFo^ D˚Mt hA:ICzlu{6CnGf7ڠm{]~RYkONya69'! 
3u*ԑҭST'u⁗;>7FFK¥`D@Uԓc~p)pթetRԖUIf⿼4Lj hKa9RQϲjP ứN9G :# Pn3ϞAe՚K2 %0Z O/*ñZP=ٰ%92}M\9򹼷]~!n/WJjm7|n.bTjւsʢLEQr<)*rd֠܇`<8=\U'uvSJNE*sԔ+e)79a 0]PBIrve~Kbϧ͍ y; qzgz6lIxήo=&o *3ET`qoO,[/xsEv^%dhR*ujsݘ]ѕ&NB ~5V#.G 9V;E) $x"1$JfUi-i:sp]CmpV8EVqT%Vev2#t֐ Dl\y)u=Z71 HW=m#,x掇GG/#ur A惾\7͛>kz晠Zb3IaN%dV }ŹRBb',l:tx &Nv|w|SI"#@Fo*/2>)m^uR5YzZ8p:\:a^Z˪G~~۶lF.x]:nQ={)7;hu4hWkmǾRcVzVQsKw>P:aVU$UR lTHNM:a>۝0ӥyP(h4ب)͵\vpc/);8J2D1ͷ 2ٷ;0,KQ%3qe$x ͜ҟOxfפ)*M˖ 6YdU, qZD­'Mv;|C 9~+vfpl3V|^es̮mX}F,ւO?*v.֜|Jk:?#ms\WZ:\V0eY-Z1EiM0ՉTpC9YE%oӎQ4=x<4ejkk+7eJ/Ƶ}H֛<8y-\WܥH^4(h^T"9}q gml>O?*HDf ?\>'prwUoڦζqڸwqܹ{pm ,^Z{MO86LJ՛aGYGN?ˋtU:|GajC5@mR o>oUן_X8}6Kޛucg _23R>X) =+ ۖStI@{CćbS `ycd'ؿRRd]sK Zd:MxBog5ÌWTnW WxT*e)o (5Khj1|\Z,t>)Nބ@qbʨHUYqm&HB zW?y,} ,BXeŜ5$NoЃ!:h_'ԾB>pveV#R3*\a;BR^kPĨR!caYjnq-zz F( OVd)P/pJ[Tt, oV;YoE gJGͰ8`\, =[){yC ERf$^mA,, =ӿ3(DUbyɧ|JF+YrˬPĊ(i =ȋloI^yL&TR. !8XA^ ӽٗzTV,SJi5en o.z;e!S>1aQ`o=UQO~KILɭ $poЃsݧxB `'(`J Z,t \ E$ޔH?B;^l([, /(L>*Ud6{c d%x5Bp$"ե uBpk7XB|DMJQ:<U}BFj3w|-#%tނcFQ;1by;'p1pj<֩9J)PQAuB M^;N@R:dKrS=P.sm@|[!e(8=]:hDU0 5Q }[.Qj`@rbb"_bD\ZcվčEF+zyi|r5b*[/\B |Jΐ+S]VísK|zӍE{/WR#.rur%Ѩzau0\Z+ľUE"W(W>_R<xYmB(jdd _s|{}~ysz}{X uc=?Ջ3z #˫|voR#zI?{O歼;u1ِ2>gCPJ  ئ߮^P?:?LbU%9vvk(ZVىG?6eg_}@̗Sʑ{&tP\]/N_#IO9FRhq]УqNT<#$^t ;j~MOo8噏:Әy }60qk_W?[X=zB/ϯ뗫P'ǿ6xrE28y ػuٔG<(woޞR(?~1[iݘ{aǓگDگ ROk?k3 &: W0Ї[o+_5isQz󱇂փ/YZ?$| -ڥYgeѦ)`d-?3\j1 xhςka1vW0`ă5gowELcRLgRP2Y\yu~=N.הD]\|?+z\v>c$n,F&&1d4җT<9!w;\|OӜJ~Г޳/@8존ELLx|wYR+ϖ>\O߿|y]]KoLrvZcU-ïy" &MwM?^Ma.՛ǾWIn/UaVb9a1_רU(̲w|-F%Բr߮tyɸgWW(wKhҌ+z6/j'_M(0ҁRVp;&uY,*7로 mAE,<9q#$`Q Z,΄bGͪզ0Ί`&Y}>I~Phl`0!=$55i06@;TP"|2}$)K]ը̆m+ΦT|1 i}f{ɟˬ+T&AKYOdbusOPg~(`|X'ߺBdHbRktk9S:pq~LTg3j˵R ȷz-_7[tސ}zu_ӣѿfgTZ퐇pfWNd(O0J7aAôZ&۔(lb\E.VML^U1%e^ܨ{lM]WϮ/FoGPQ#?$fhJޤB$ .KXcワuܝ )QnjJмHw(RSW|>Ve&HVO͚clMuJ]!\Bz}b><ߴnV&i[Wʼ/FqX\zݷkя4.Za5C$ΛKb|IWMŐbl KP!FS/&4 ?d\.*'ts.uȦR{΂,ZH*wi(l0OR+2|iine zg#M1scx 2l`iJB<_~%ZV˄,sup1JM Xי+&Pogoo} &<g0OeW笩 U^zcn?ޢhUP޲hXu;E3qMn(r1k1sWݦ𦵪2`s谡"N0Ay3$@wCۘ 
(ac3|ݥc3ĕ&W}S7Mq3|s+7MNlx<{ԍ.wRI%SYdr?0t:_Q%)f@b5CL-?~><>:^a0_L49Ba"0^uNjɁo꼋Q9G7ݮsmGޡ!. r &3,<\*mJi4`n50Y+@}V}ܑrx74%1%؟-bV\_6_rx,a>Z_pvN^HKhvg F=?n:h#˻!ۉ=O8 vE>5y7oG>+4ޱa/2'r(Oo7g-OǺ@"#TeD/ Y"HXCɐ2EQ.fMK뮾D_4v>u;om٣ljݒm|nA֙&\_4~ՙ.i2x?6M!.8\"ʨzJKHF `R^05FMyl $Xd%8ӿ&6`}Qll0't&ct \.jթɘjIt&x& }Vm Ua|T|@ IumUT/ITMCYZu;yjY:(8K7oiK 0I(= 'J*wduknu!wt )>B;D[b¨oDc`1rșs:WLffNFmoQޟAQz>_h:p SBӦD/ nGm߬1 i4JNY R!Y!;Ӡ>vh>PrnY;*r[4-2O>e*-l)3s~"Õr~ySe)Lҏd]Oek]Zk]ɃUPn%'fl:VΠ𿋉5BRO*;G#9b{R2}F3^Z休_^& A*UQn4c:ldfօ|y >.ow, `w.weZϰ Xu2nͰ+7iUil`Y7ޜH&&E% kHȸߑs̩T"҅BY{Uy2 x۟[Zx@s"k]F3$?zjzY^3hfb>bXֽ6+V/u퉩A^]L;i*=V[M]2}%;muY:GSBxtw Mpf_yhÃУ+<5@@ \z\^?ý- ҥ ≦O^N:p|1}r)֝$>Nݽo2|=9q ipsS\{rˮOeޡF{׺4Gw5fn2հWS-#'RZ/qZ#-9a=Tq#S#>tzKTH$V!:ǬH#T`ŐEBPEǤ[;q—`KۉG{W١ՃWkstjWkX3mCavmw~FnӴ~}"G^ۛ\_%s؟Q]4խ5K8igFH9GBK6ҀQ2њZX!P;{z`r[cb"1(PhPeBF#,Z;tՖX#Җ#בaQB DjT(C:H(t VU%XSj##OUԠj?)f)taR)8J)d NZD1L ( 5H800P!zF{ yB }g!58zb`0 TU+ B@WHR'P@m8ԙǂ[Lm=h 30$!EcdK``3R 0N):;na0*FfG>PfTW8ZQs5:9S[DENԘ- Yǹ5bi J-Z̧= O.0 Kچa "cVyomiݹniS`niq 3+ĩ!N=8=H`C!A¡ 1w E V!~CXi#:b#tإɻ%̰:!HIof;ԗ]Uayk Kʛ}L`“eAc.G-w;n28x۰a,$JrLS/"TgL 2˶;_g%Hr\G9[v} 6XÝyD b&W#q +xFGeo/m'L> Wz ~[ }Ɇ6dw0.U,ٻ8rWm*XH.-q9_ʒN# HHjےoRWŧdbq;w^c]_;>y<ֺ~[_?Ke\]oha5 k٘/kE~>zajSA2mh~ŜK-o5cB\ 9>š֫hM4ep`٧\8؂ѧzx!h/|ut1~j/ſi9Y\M׹ח:.O{|o\|_zzv;`l zk|r3tz!M+ Қs[ STF!Viv.mCi,YLԬg]ZXO~zWz A{Uo|R$C^vhT;XE u[*M'U_wTSӔyVmUq߁~6BwZ0bf|cэfՌDY8gb<əï/ML9\+j~HVJCVϔo\kBN{0/ ?^uߎ~YXiS@9agf(*fB`޺c#voO/wZ7KlZ_|~wt{~'^AƺKfR:BJ< Og5y2ѣ_̀Ռ?jFPo7V7m#':y,a * Mb- }#?[ۉZ=en=y9%|B#;-N9"^VUL1VƓl#lm"kMld)%9D xIR΄fZRj̹]^w+h?\d~k!O;{| ;m:Q?Ō>|q:VW/Kj]+oxD$86Vl)6k+Z #y)e)d6%zIqG-VkRb ʸS9MJst^F&9B;)J%/+bLqZM-&:tW.gKL&BU Ś2S,!q S}:r[?Tg`JU"~Lq-4Rv)rva8O4H lƦV) D"WMXl|5m W[aYͽ\Ӽ31+?34a_Bad^P߬Pg+TָۖP%_c6KT=xz~񠕦 6^nV/ǶA.nOr}ؖ -#sPorghX!g)X˂"V.a1Gnx'{-]Ytaκʝ6.jւmwEO ]޳G Pf՗?Yys\_rfq7h{7Μd.Fv!k>*iku ak,9Lܫ*h&>FvĖq9_پ&V;&R;ՕkAX"W9dc?7G6fZmY"/ڋmA^l~"ڥ0_N*k֋$2%O1')Zr!SxǹTqk&8)LBO:U.VV*kRpՔ}}ghc~54MP\ ~t ɧPZUQCW8!hU-^dmZc5yv5۷AFzG(*JpjJ91TcϬKUU&%(L*ɦ5fC ЪV<pH1sC/۲WfQbt#ë= ߵ(MZ"g)MD|OQ EɶT4HM:bS}~ :=s oo{ te 
/c\萃QJF`s.kMIGQJS` h("bxdJΎaA_&,)0f=`Xc)Uss߇ :Ɨ~!e1BDwѱWD"͇9RLdJK cL)&e! d8KEB wbmq)N:!KȖ !iFu a^I[6UT1B7"5nۼ/Y#6KtU`QXߎ571$De8fЪ6fw=#lm+{e[X`)`O-HԴl#Ha ğ] 3`!O: d ͑-2%HG > rcxJrKp] eJc&%STg,=:̚Ik#K`yZcatk-U C OS p $\yX  ï*8r b` hS fqh\xAq%Lڲ )6:47՚&YV#6.j d7G;c޾O=tϒYѓ, -}t+ Ĉ_40i 2b#MΎ ٱJ"52  Z]#.-Hx\e,WZC9\mR+]о kKZ~bkfŇtչU@~ IRJQ~Rc'X;UzRW4"s{ OĨ!4^AMh0ڱ[8nl?;xR7ǚ7#+U@R vW)%(]!@˸}6OK B{J09k.(Bƭ帱h5RXZß^p@AT 5aȶqcF 2t !Қql%[U CA2 eciFhq2)8+ ܶ)5^fvUH(2&e9Ukt0I.9 "k!P9ruCdƍЕvZ9U>k RkPRKNfUGg0`d3Q0?op8i#s4-fkEDWВ*@Lv[ :r\묝xZYOh>m' O *S]ψlEJ݀}{B CF1*Iz+*Nx$ 3)`3 JX,<ƃuГ;EZY∣O #M OëGAzͅu[k/hl4*d^ @Z傑C&d0W c~ B /NКyaZK͕ 9" ?y1TA+n\9)Y% Ɋ\T,(TDcCsp]4u ({c\DNa AkJu$!@qٴN&ʄ[zK4jXE^ieqEHˤ): KUZ_wMB,-&[ k4h7vV9'iA1B /A`|ƾIY|5`k5J!O?LnT]3jpj(BdJ-줆z7d\IyBw RV-ܢ 7ۣ"F>Y j˸P`e(0mA#5}f@4rh0JVxH SB*hb=~OD'UVc3m6p!ȸ͈.I &h@t!4^줓b~4g+N#*!u8_b3 K7,+=)h I3&j }tU aAp)7$I۽^}'Oftrr\_62}l:oeDEd(_j FZQ녬9rzsvϡ_t{,99S:K;tΒ^Yrt>`QYrYrL\)L9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9Sgԙ3uL9S+gsէe߾;/6_X6rlx\ j1AR Bb 8x{?o~`Ikd݅#t8`wj~yf<'oO{\O}7%glH.uqɋq^ ?XSjLZߘW]ulo6zPNwܤA{;^1yMn_]=Im v-wZ{7{C+}Z)qy~޵^}w.TwQNmUY=_\c4P<]y}F?{ql@_H~,ڸp~HksED2JM~(RCrZ,ÀMq8sNo<W5/z^ &k@ ~88brqq;]e誢~VQFiQ š(Fضh4̊w`V R6= j#iuM]Lpu<l5٬O1ı hZ6r+wbQo&@)]gӺ}[Yhv]KDn;^s+@'W =4[~YSH-~D?͎D?Fa~o'y!O$O$O$O$O$O$O$O$O$O$O$O$O$O$O$O$O$O$O$OPw.9ƍD dX*`-w.VzXvp4$:vU&cI\!s]P&K:Hw:m`0Vӆ 80_"^(DUAOkZ S+BcHb M$B(U,Q@mE!U$ q]bX~Tjl,Ҡ$Y\xbc !e(9X8Z+Ev"@+͡ˮL66DJ7i' '+ $vfy'2{X?&xfp33LJĽ+ d.IJfd #SW3f-y'39g:AZopZnWŤ{ :K+>='>G0?I0y9x/w_?~/:s@wl%n<=l6+ڕގ7hߎ_W`l:,9~_k$_y\ m/pHo\%.E*JosZiM.2VΖ۲H60tx<ɳu/]$gv/ٖ_W_Y\CTCTs2ILp`2Q"UJsod"(tjp1/E S |L&2CPͤ0^QXGCs9)~GuTVr+ܔ\[ʐ[y.LlʝUI5&%׭ 3r4qH]qOvZLndq6/8fs%̲vmϞ%̎V~rblm֋M1r--˕ EeB6*A+$9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h&9h4x LU8ht\-BoW^ P:C]х !s$f.Vdl2u&ԕ4+X̌ҾSɘ[=8͉ u Ե,,P׎XNKW X)U&`'3di\!aP)A{S-YҭPTO gk;n3XB0ˤ+JIܾ!^;{IGWsv=(Ջz/ݕF 6ӥŚ$sU <FeSRc S<[ Nܑ'Vԧ3͋F[G$t ̹$56x&ϐ6+lhc' R R R R R R R R R R R R R R R R R R R R R R 
RwgN]|b*hn>OxET|S#fyn8C\p)eH ^#b R89GA W;Y$]k<@ăyVlX4:smV١fG U-ZL}X_Ժcw񒍅 U0^Gwz{}υ{1esN^ iܷ(8䛴pX7H^\: 7vkէdzȍ7Ȧ[kqE$ma"`DކɍbDBna ѕ@  ]\d(th,2$:+]I5BW֚[Wis+es!9:Y0tƅBWtݟEIN3+, N2c+L(thycWRQ9ҕqF3]!`N LUQDWgHW Z3m8+t(t{ Q :qcw duGU2їeI 1 KhYF#JC4SUAi8Z:fE9|*qye\#+PϘe._V}f8wQy0çy좝UsTѡg_?.Me0-S]KbWp3 wg(`&U$0eU˾$rU^yp˃ID6R]-^2oa?9]m؟:y+!,C{b[Е%ڵ +l֡=?]J8ҕDWXp ]!\C+D{ Pʾ]k+̏0+e(th;]J 9w? a&f|>S;?|)hrYKXofR֙jC3|%~8+ LQa*$>)tiboXZ?VdpYNb SW%xR7+zqy\6<ū>i˱RSxf?ch2Ma^fKo/;}Q8//0h-gbmI'fS`r8]mW~1:U4Fe)@7ѣ=5oPk?ǟ>Ds~r}͘?M\eipxW M]s:b=Y0Rnߧ Ph>~;znC/,}:ZOq2A :gYKq풶L`y+{yW(=gva,-:O/_K:"ϵ2vѠ+`6ϫNܦ@-f!\kSKœ$.WR)> X\!E,jP7`X]QcdzmQb+ƒ5}S -36NcC˿v@w(e?e-6хufռ`g%SWOO@ߏl JQm4ku1Zbπ/[,[ơq# |2FC^Z Oeo>i3Yb%*]bPl1@+-(%[ m1 k(؞Jwxf0CDrcB(U>W|tHBκk33զ>i;Თ[%FBIV<`}]'}/S, \`  ]Zz/ (DWo\ˮwLrۃvO~Vp;Aۡ5' oon rDWv=2]!`'+뎺EWVȾ JHU>BƆBWts+8" š` J&B+D+E QjAtut*$R3pM0th]+@':C bWZ9\0tpm0 vD8ҕ5+y8 U=]Yc|jMH֕Պp `+Dz3(9SDWgHWX"K^s- 1ꗱ 1RTchQ|Jdha6U &YUСV!JhU=UEHt @BBWV @m]]/0{}6h;Վv(ejt剮vzv"BއBWVKwB]!] ˌaGP=UV(WDWgHWJ/E@t%bCWW ]!C{t JYcmHvRCWWP ZwBJ[ cB>h;t LB `á+4}+Diҕu2]`'Y0tpu0v@%;]!JM HWX|U0lbFPN- LjRI@'}l5zm#7''m3N-kUB㴏 m՝h'ڶ`UYHIXF2$lDzj]wHRd+ξ;5Z}Ԝ ]I֮%L׌lرm W6qK4I[==]ͮ%ڥ3 { ]\Y(th;]!J%ΐ$gλ #vEWP zw]΋ {]B,g*wBF]!]in[1/]!SlW-JwB=;8teAV,  ]!Z{ Q:]#]Y! 1ﺢ+l(th;]X{No$:]a͈b}>㥗ae,oRbsjU>f}aA}ȴ(N.S\h%I)2UdeʤK4K>/ϽM>Ye]]O/~~[xPb2bGᴂ$g4)>,/SxTbz\ C-ˏ%ǁodU DRF$I晲Ԕ^okmI{@ha`wn3l~ƚȒFxWMROzش%9Ldl5U_}EVW@7׻?RuU.i_NWW]3VOJRoi0W 0b*]< @Z>MRRի+|+DC=R\~ڂvGJN8)yGp;zc W 0'b*{UGJ+Rvp a%b* K1JRvp䴃o(agz…~ϟ׵ro+j0U?2W1W/o` QYlDƜәJ0 {-5Ja~&Oܨ( AX(. uA:h#NcJDЁ Cm/T3 YƐF8.dR*1!QfwD 8q2>XWN,q J"څC5gEKL`( "|ڐn)zCQ9dQbxhT+C Gm<MeB+)IG“$/$0$D'8)%FWre#j֌I4N~7E6:([i(=vOAf7_޿VI_Gf_>`24ɘ6U1(dʩt<?fWv=eKKp0}. 
88-n|odcˋ6 8r7mgmеk*: nWGK2npIvǮPPG~c jsCvXPQJ {,$R  i~Ja~BʼnIJ/6_] u-cyJSh>o4ΗN %6.l']v_+m[,ܦLqb;7t0)j a;~1.L+i՘~n6w៫"׀vwfσ&)5sqn>e Z`&0eyZ]RRZ f:CsBGm:Ia{7Zy$%H"v6 F4fEdޡ(zӹLĶ̺3FFLw Pjz/mI FQݢ uxqZdw?w`ZˁfUQyUFefo4Eo-eFȌjX)TpG -pCgF+ܬ`B`L\YB$ƞB R,C=cEy`e#-F%-F8##aQB" $< ƐȾ( Vs X{XuAcO9CģJL(5;i0^|p6#YHu Iā QH0ra 1& O C؉+"s!4JPCVa,(1^!IDGCqwqSkq+/!'Gg>Pm܅($ OX^"EHeg6{[ަ#/o5}az;y//ms2Q-povu:Sky3 fˮnff8K[& 1kNK'U0Th!biEͰe|TDi mx 8&ivcxm6 YJLL:/nĵ}82ĩgSi 1qA(,$Ji BC1 !P|ǨYuBյ xr `DkX-c^?Dj;&YdbɍWOVN'Lk.)o'v9s+۴|ON>dT=!tt ޭcM?~%6+l;EA1Ҷ^9Jy"0L[=z]TI[Om j9?݄r7 }Ɇ^>݄QvW}2yχ\]zNPs}?Zx;Ub%[i9cçBZqЯT[hKl;"1L' BuU;e[DQ cQdG5PZ3C'zҥ0@rIP 7ұe=zќ 3$0J;ƳPW7kr aw-1J)Y: ml5f iT-Ϙ0.xd(ϫz P. P;t 4( (4Y0Ґʀ qR J$ 6bXGyP^!jn';]JL糵Ba[%axmgb{VI`.f+jWby)[Q'i8K`Ga~ido-KBe 񁵙1dQk!FPQ.>">YO%*}?]E9 Pd !LH1 CG@ vvM+ zX0RN`WmcݱOTM K"u7y95Ϻ: U*RY%)  QIVFp%hƑ/%࿹|ܛs[ ';\[@4pLSst5b vpn8y}]^*] ̻2L_`=(Șҳk@Ы6 MM9um73gz"dzBt|*8No>~%xdc#pc dQ1P9Yp0xp>:Jbt>%p%7WRYKT[)0S Y&1H2B.z˨v@b9lmd.*N6!F6 cQ8Ea8"It$",MÁ߽ܳ,-;q5[~˱Z@SAx(d BC Dc}D:uLiE{a伤|4#23-V4:6m6{ ,rnX DP܄ȕ)(DcJh U(  wzF: .l1o^<ƾ{-ב;^t/'R Nի3,٧aA^LY7qkpƔᙒ&dQI(b0us#W[m)7ǛL]&MY\i M^6NGfov` ;1K<.P)+--B|4K%J`!xؤw˥n.’g60ᗓo}z 1RG"D *1u%G&QJkiCX-{` 1}Y!:|6IWA2&[9)E|;ry9mͬ!8Xu]!QJ0CNsEoì_@ZH6 ;T'uX+yW pig"D$rj) ^;-E0A{dEid5N*lĞt^Y30*„b#%PiBZ'X6gLY58X؃9u6l N9z˔01p-P,R\wXrp_9GAac G;K,紴g0%ynK:(ٻ޶lU@a Xtvl ]]@iZO6d-S3|v|Ljֲ3)n**^gxxJֲd9ʙ.7&0 ?Q<%ySbOu񔶉Ԇk*Q "3>Cwtmq[[Ҿ(hTdsvy\7\B)8p?Ct1}6̇QȲmN ]`>}mտ> J5r5"q<~km]T?ۍDcѐUoyYݏgY-YavXeNgϚ&}hdkِy\àtc=4|gLۻ;50Ǿ-p(Kuy, [iXRPPŔ k\'51c@BQe1E h.HAC|D|~Au[ lsԻ sˣGt3A Uʣ4FINRC(*dՑ.LzKU^f~cBok&z2LQÕq}߽qdǍ-u12,7U\,ݶ4} FAA !< 7J€xny_<[W%< ωNd1,{[o,.lz9`Oi0 6A٫oR0MAUgY`2 5ɩl 8䏇z!)һ :WBeq<%[c*Ȝ2r {M5>[c؀ 2<}+z:̕wLrc .hM"؁be@l#<ݗbAd"Ct!5݊>=J"6 EJ s)e Isl1ȓ. UvBpnDr 3 @ c"q,;2-Zg%q'?[-MfZEqj.iˁ5Kq,’|sS00dIXC4q6%XE,U^N `) p+Ce/tD<=%#{AfIڗBd6)Oދ0 )|agXT)эq/gI/~]\kK|mװILҝӿxui:_2z⼹'{ iF0fyGȇ_iPf X7^39{%h}I64W8^Gv!LHOKM01K:SgҸQg|hhE,Ue7*NZK3ʒ 3KT}+Y7\D]LlTSǙ\!o??L?7>瘨W{upO 8$΋!G @G N=mu MmӵnsՂoM”i_|O ! 
n]Mv>3,2dw}B[L+Ę1H\hS)~~m oydgj0s 1 w BʊK]< F;ЙV®H`q2Vl&T E:Q^RD*Ԫus3^~7LZ{^ K:@q]q_9xQQ8&,-6"g\(r)$(:5a0Haxokc*)7* @ppi7DAwZDV kc#&/ "-,ͲǀTX*9eH:id6mΖLFuiiٿL:O"ïCc]^ >5MZCƨ[48 ]h>i6zƆ+,}Ӄq&ZV϶ڡ*^ +5zÀ]L@.jP7mhtVgC4ɞ]Elnݾu{y͍/b!iۛ>]Gy=?|sW74\1X+9v#mY֜NZu[{e>EC7a()Y~E j-WŒC!?!DhMJrf-&G(D5*tBN h%K) (bbw6q۫5ZhGyʲc'ΰT gqE2G c` yiCyKi¼Hs(q(E`v#ӞG)Ġh(Pc$Q[PN}NGU!CC~h̵(Z'1)H僩K;3kA[P"HA M\927ZEpN!WD[L3hlGch6s O8\4T*V3 Ue@zz^JR!;hЧ Y.RzT7*/OBk3z?zPKA5`,vCothh2 {\wo~xh;Դq$ Z0c]L`#Kd68E+3SFo99+['U((Ll aȖ ]ι_j[Zs<[閭$l#D'u-˜YYןv0PӧMOQIϠ7YiW=)g#7fE1 a6ԛ඗TV0'< )ݱ0ׅ~>ZУYTskpΔṒ&QI(b0Yv֟ {i̹aH7V|B YXᅭ[6s@C,0E++Y2l{3max~ƮO&޵6r#EK6E9@\2X`l%į= bK{dɣVYZԏdկ'zKkFgJDo5yZM,̗) 8:f}tҴI&M{JMZÞ@)LfWRJOu^:uUϯ5zhz:(Jm?tџCi43,5 =4o 3RMt5oGh`NL ^@Nz"[[_nz`Z7SG]F(Kn5އ%;Do߄O#1D ɲAp  ZfS\I4;FS3mb݂5 6ՆNIN;(uV:/}%N<&;هfj%Do_0mY3CMíOj\`oC9dk 8+H 4)Ññ)&߆I/!8D<7q8$-8cF,cpj4\V[z6t}S䏽־Yϐ-xg1RĨ,you4h0[ŜT4Mg2嗤p@2O&h%C!舡!4ș~`~ N_~MTcLF%&I)moCkAjj><ݳn2GQ'AaJTFV9ҡFYG%]?HO [he8_mUv:]x6KFIˮnw<FWٺ[hX`VW[& O]xKʺخxt&|RX4`ƫ `-4iGyv3-݇l[p{d J8Ebq J5yH.x<= [ dv "7Zd/і q. g̾Z}E91"ZBf;UYg52@ŔބēHZqyXe'N|$'^]*,LꍤЉ 7fλ_k»+")n_f?-vᵤFcO]PrbaZlù4wщ>k[} y'go5G΀G_73x{]JlniW~ۏ7_@F[eo?-^p?w\AHN/l~LO7ߓ;l]2pxߚS޴1w el3 Crvݟ8#So L>e{9UMҤ9Bd&}|RdC)fW,O~FpgxH5Y'57>7gdͳ6ߛŖd n>'Bnrs,:g͇U>ٮ O;3fI"_yH"ШrɻG^/wLsο` #%ӬW+6 kqCW]61W_ j6ؔsR W53 [Majmy;hs0%ʮWYcւ7lIBN<$?kogY;ʺcyhp&GkphrUj@=-(= w-ʈ wgt/OϬ thqhɕXv0m/g:еY@tBнݻq\Ѧ0xִKZ e \PV~-ɸ&6zqڽ/ 0bt0~hwC!QЕAWbN=GmPTDW U ]RBWS骠8ҕ@k g8)3]BW0&FOW%DW{HW,e-S/.7|v~~1{o_-s8q@jL4Y w蓔dS"KV1:u,-dmi:B"vÊ_ư*L%DY tH@VGON .$maeEƵHXEKUfƵl}Ou+pOص'wˏ<`Q5 f l,ʩ@Im J0(:xlIO' |zvj߼㬶Th.F)uoUMPɉjTPj)]OP7o8 DjAe8oSijB⬦@5>:E^Pil}teP]`jP ]`NW]!]iִH2UX ]ZptUP =Zi8 `-u-tU>tE(-6lʍΆͮrVPoukti1W9zң5 86F2%F-R`Zj֌TUKju5tUjB ]^(զS/9lz;~pu|WU/qe(J+9c^0PBUDWj骠l=+L7KU ^h޻*(ALttp!.j*NW]#])Jq-x= .k+BPv'z܂z+kBWZRr= UMkWXpU ]l}+Ñu2`sۯft`Y=E {IQS* _FX%aqQ!Bu?v1W= t- b+Äec+s,GWzV ]ڌ Jc'z1tN=2;~SU/lzLtةpUU5tUࢭ`ltUPr9ҕΚsVwUjVɱUA]#]IPj+  ZcҊE9`›NzB]<6}TPGF4`hjH#YzF! 
QYOʱYf>FLRvZk ZmiC01l3ƠaOYPԅf.Lwӹ.4$#>7:RtN獗SUAXaVQWT:eb31BVR<͎\ x5 6Rcw) J4K.E9j5m`aX5tEp%&*h;]MttӢ""*p Z;eB٭2*+$0>Cҡj+ ח S0teG]T6U:c@N)CLQQؘjA`Z[Ղ&Vբ6C&vK%艮^ ] ^2}^K+wZ#R̻R=JMtةj] 9v*(;2;Е0qY]``xWWj+B;]v$ /qp|J#\7Wח45(ſ8hϓl@{8Sc@z7&w 9c,'NE)MI'3A@] .<¢ l5z]zqhTQ܎ݝyv^ktTctc;1ټ6K]%S)z#읕l 4 4W,QdZ||:h1bSv8:ptlQdG$E}) v$!4b+?.,={x~?[Q1@#,)*|#!ӬrL4'WE]mRŠ6m+ܿl^2 M~3W$jmB]@&x/ }}oByn寳ߟijI׋.f@H] ׳]>8Z? 82m,:%ΈS$D@7Za=c. D E6^놷oYGWN8]uѭ0pxh8uZYagO_gog.nGJ?_,tCK;.OnV>Naɷ]uǍ ?RhR&ţM W5k~cm!M~AݺZ/ґہk<=OLt~~M"{.HZ9$B'@!;-iepe'k7_SyFWe?>n3݌CY!ho@Fq[#?l%" Qb_(.GDJr=rW t6dg%PpQ%p%;hQf+SM 7`rfuE+[dɂ֎"LN1dwe=n$|gb}Ѓ,>6V3އլ٢M5$b#GW,]]`$Ū2⋨FKON]^'/WA`AѶ17#M (B& CcNPxi/|ch=4\@c\\e>Tj9[3 SRNyLYxd\B_oXm$nbMb|3(g;pI7fIc:98W?LC/o:?ݻi k0g=w{zԲ֟*iJU1]rW@040>|zTJѻ记GZ/-GRGݏhbl zیڝx &/26>;fJQe6vo'i6dԈ ;wR$W.@Bom$]^L"2U/Fizh-btqwU\]բg ף5DNU=Ƅ.HOE.C+*RKYU5+TC VLv]Erꊻ*vwԣzwzű~%>igPKHHe2iܕ{Nl> b`ӻy|0v煙x vV)(F:[ /4 ">q.g"H2kn}Nk.56beb\3\(5S)! a*#Hh33cV  c*oV};X cKjZQ0e7 ,R>k g#h+F"! d0)v0X|9mנDI2! VݩGrugJ[@-ǭ/mE*I_z T1нoh^+DJJow̔Vqw`̐h60Fk{"TNA ;Xl%y ņSVIe.@&)BF/B0 73)L+e0rP(DRD X LT;$JITʨ %^1;yE@)!5V)f) j%B 99+ ;#CB((-=Gg-A@&A,[~J{ѯKΓ;X73NDuhd8q"FWp7; 9P܋q\QVg8 ;0*Nxp0-&>Vǭf/Ko $Jg4|,<=E%ėh_A|n69Qef_ *⼺t}>8/?@ 2$+1Ɩ.WVo|My;,Wt=iopCjvު=~N4,B2}kc;%8Yt5.΢ B -nǥz:io%k&FAXE u+I vJ&kIW9J!yE)EW*4}:HT\ a1䵋ZJ/Cu2edC̐ KEmo%K6|Aƺ'W7b~%,pR7fGHN9Mz??crPi_V3WR@JCAr"pJ69YePGt@']`:Zf 0&PKN"EɮABBJ\͑^\z7DZquobߞnޱYWcuHԖmcoR-% cN֖jhYiB8qM ԰{z_fg&FK:ıJ5 FX_Zuh uA8AQ(m8LeJZFf kᛱ̡zK(-tSK]RgR>ev\]D@e2^:Ӽ*odEejZt0kfāĤ4ۏn;fH")LدN̆f/Jp_$eڃ.\<OƄlYÚ(:3-9PG6]4+̓<6 Mߚ[L'D3xTeo3m ] o5…98xKKU)T=]O2Pc/+zάwVSQg壆u KWiЦZ=-V),c. jM `{䎯_6BU-ee21$x_%4r"L`^Uc9ޡ5 n.vA/3kfPY֗7wܔ ֔sƽίgA1Dq89h$NT 7{>f閃y-Y[^r|]0~uo*+{#' ycl~3T_x g@E59:yW~H-މZfltY^n`/*^ބƜ6RF-8J\8͂Yſ6>;WSW76+(tRT /4Ak. 
6Q  E0QK_(CJfHtp{*,dxiPWX`o;WX\`xH@SRNqn<[WQl]s5T^$@BFв g=$&T@{OSZAͽjR~#&VJ\!!#8:ٛه鏛}^+WkRk#o˺87򳍤-ֲZL0{7Xl|g&ǘN+LM~`*.-|y |к+1&yW nJ&(=Kܽ79~m`ۭ;_' _C)3|Z 2tG^b":fi.9ᇧ ?hMf(n߮_2,i:erp], gň?/g61,vlNC`޶S)Ծ11D%!ClY2Vz #dExN(5)f$ޓfFБV %ד4kS&[,jҹ_8D lu=&f$9^}KqU- %[S{_Z5R@ VA ' M2uu8Iysp[||&PDy 9n݇Ce!1!{nkܔ_}NRȻjS4'9j,Mǯ0u4f^qf66ƚJ&ޗũOfBDFXtiwE_ (CHo%P?ۊ:oSۻ_z~gJd&RVy/XٻPjF}$ΰwk4b L3@@g3wsPѫ+P%^Of}.zTsGQ[s'Exm.L'pi(''_<_lZʜktRW;S2!%4FpDB0tVNM>*v{gt%|1Q88}-n!9B9eYcu%Y5%QEYtct8 |ODBU52*bD$)_iҢbF;UOJ~ &ܗY ")4L,[>~Q e^TgnpgKKznR =~b 3$߹BF8y:c[=VOio 5t̊ {̑xcOz֝<} vT`p+xAF3X@dZz86ƃ}ܫK:W6|A^W }yp[K->iG MyU夳UwUNd`R12xʆ}+jUM(p%"7Quyo'ն5ȃT28G*MߖT8ȸG+ņ1.G+䬗_QÐWa iiHP5Ut@M4')2=l߉$dXЙ i,龑7zk4( tj,fCRZJZי /@ 7vmHQԊN?D DhFΧx7Xb+"h?CjNVS %0 3а叙V\1F?WQ,PEw СQ2[11Y@| >?`fa),#>ߝm2q8/4u?@");4i{]d 9׫mMuFwQDvJVR0if!Vo'+Lz+߶F~m>qetάaWtΒ[0d BD vók~ brJì?"HnkTA1}lr Gs:v{&H,u/˃q8 26RKG|NvQRXl;7`K-A6oiN"Q_ۥs&unf*X6}k) nҩQ|3,f?-_C\T:2&Z>=NRI~X(cHʼLUP}*PE`qRd+1)y̦=dsE%bfD dF= N ^=aB,qns%"Σ;`9?'BTOb^36TcT΃!DG޾ΐ7)fs9rH0;3t282Q1<)L(MvcY!vέMefniL`3Hwm^7H&$>5kDdYMcH iI:!G$YKĎ(g8Dvw9&!-XRE*3zfɜO_iRf3!&PK eSlERGKg?iJarS|4=/8/Qcu*qMcV(SX`W`.pug ܟ֏ bln6VFFeͻq" ӭ9ۭk)zUs ጓ- 1FAh*s]ةEq?>?n}#GN|z%}ׇ1.~ qyoϿ6 c_Zs:sjŪ0o?? 
"Uɑک'52vl% SpO}K 7AgCR{=6ɛ[pzeiN24WE4-E,l>G Iq?hK>kJZ""g7k7!FɁF7RNP5hoQKxNj@1TJsܚ,8e`5o>9(>=ȟzh6a 7j}6֠ݛC𾍐I'n0%!D6.V)?qFT+.qZ mp«CG_A^l`vsP#ozls2fsvcN99d'dOk׳@aY^U{v(jt'>p.b4qV ʵ4$j,G^^uPsDU&x)|łJ ԋV^rQqֺGOe4hE dV<?B1Ӝ(JQsO:TsG|?CO .X'g*F^4cx6_52Q.~_\+sn-'|'n*^4(UW#ܺK92vE9+ҔƵj`oJvNwQJgTe9wRKL0&b5u|}_dߊdo[RtgBgE\cX=-˟!_O[_϶ ]NX%ݳCoOZFd+` WiLxMņg.;u(j~<0zWdar/Ǐ4ݒgJ_q1Uc_s-JU79V[%VH $8Yܝ}L}JYŔbwdUmI_.Vf>6jYǸ>)d8=X+K0k L`99Y\(Fj>KfM#)\M-a+7&)QSܽ|ڴfQ û̉<&Qh,(i©" Wch8$J5Q0PIeVZЈW ٹN?H VuU]h[YgVsPY>vS^a{zJ DK]K-;k'ﳳ?,ȳ~y(@| dE!/3Ebo$Ts "% IhQ%INqeV$I-@T SϿ*] naW{Xc5D+{g8/ؽ4qZ1J~*!nU>fT.l~[ d呿xI@KPu-AGvZY<,yfQc(5[u 5_sw % ?7< ?e9Z gw~a}Dk}YĨ n¦fq!k,cD]_̬tGA05ZQFGHa·4iRZwTMW$|8&tjed0@d\WLk i6Χ_L/^+̪׵̯ĵX[;2i4#IayjRP Mr06tؙ̋.}Z8+]l^@6Zx_$Zĩp_n4&_f>߬ Nϸ%J#Ðq,K<=XQWߏ!SB9.Iwuٛ: c?9b;= HK21f.E$%n+&.!/{2w]!<_Wl^mr' Pb^*ywzc*iq{l&oߞ5Z?eKMsm=_~Fo?i>EͿUW(þZl=mN{F.{V˦2 CHc3>ǑUF TGG.PԣOgI5-9"4+O3/=;c'Y'F.NRIz?RŽ4jLV/oX\9̷q7jBP~Hjo+ .)Pr( &t~ͱ GO"J`S7N)_LDLEX-?kc1Qfy{}Rz`,Xh͞vv%JD:KK ?InSyͺ)|ŝogKe ףOβKpCM~MhPp,@#e8RnfǪ)9=屈W&4D:axH,tVǤ&GM@7Q1ӅJʌjLW'@*@F"HByNC6Np*EofɶA &Yab  D4G~̫+YR Ss 2PiQâNR@@!NA0M1QFATTSqm |x*ɶ٥'ށX F H*J5پm42BTcW| ԚGs/os& 6{gI y^.^ŬӠmbQ#FQyw fU~֨O &"03՚pFH'ak8p&7YY Qx)Rۍ=frؘ17W ҄BPg.9,=[HRTѨ>e6m)^K~oײs"YD4,b@ )4?IQ*78e7\'K$.#lOW|BFY9@?#5" #IgJYǵڙI~k?IAF@b6+3u&<)|XmN@Pra~yf~˗5362 j  3faq"mV !B,XaO:O  '`E/Dh8<1mW,F=wq Q_ŭo[ ].BTE2% @@iPc844sL*akEoY.);2^K7Pd]tw\ a6Ac@C1ԬTCmI Rv$j 9`|u.=>/~* 3o4kQ^_=qT0"ñ K4J$2Fʘr)˴xҚ2v1( zG5bGCjl8gi+|Uce61lF +Ť2jD>J|~gg*b eIAI}6;\uu9>%OݐFmJngs\͋!ImY&qI P1YU2@FSB SbԷ=>'\(_~a H2EQ̼ $3[J2!DD$V- /S:vI 'loS\I!O]}|=z(Tث28ԠcM~q$崛dW'X0G}Ql8DFaQ~9|R Gݐ!(DۤxvL IXcY6NJIb{iZƺ)r.{ΚPS%MRT#4ɐ5{Km%$m]9A9 _⌖B^h4>5{8Gtα2Gfp2R$R1&4^2H1jmaUkf_uWh0~ ͢oL@u4q[nW ifJp .D':8w^w')$BUiM'J8pV+*X*85L)LG|S~ 1 ^$"Sh3gt鵰hՙGQjX_k έԥA u*9]I AO!?iI>~jYIyM4ld,:P޿"ȍY~Xs.bK'8鹆sKLf3`Q5\T!iBApB~=qȐ.OaiGCRw&x{諦r"PI?KD4$:} |&)Fov45ca{e,uB.niuAk .XYQor̔R(=Q68#%2D]ZWc\s!)c}?΄[ u*a+ 5{ZF;ZoV`ƤoOdv ":P"0RGk"8H ~J S\fo({z! 
!+ }o1l3#R _t@h@9_S_ˈT\1qnA5˗N9]\vŞUaj.DJԉ|ta",HoVJxkws7liq$6Œq-,G5Kn~w)B^ˑVOт4{Rf IEMǷ~C]HcJwKZ׀!MgM8Tti@UKD׎Hټ%6g0+;?aT|NjŹ`DT1>mfCʫmYw:ߖgv:,c !}|>*`$p\gJX䜥 ` wlRhd5}#<MEY \2q~0 ?0o ϸͻ7_/$D$αo|eL;=e7mB;5"q9qJun½` ǮF!WW'1i⨇vtq\S!0˻o(ZMk dϞT(U/r(e ):6MJ7)7Kx0ic(8Tʂ7~yRr#9pA`tݘ?jvnVK^`I(PMTk04)eIh희Wdm5x 8k !dv.-OęWN%<i$ۮFu(ιh4]{G! ̢Ar v)*ZOh.±0AI#zXQSlQ ƪ*O #\xKn1Gk[cn5^s&r9-ҰXS46)6dQ(Q jxt[R\Y+Ju39@{:ԣ2X=Nm.NF䤚O4$@9-%C5xBUU[7쟷3i:b~FYQ_Z9#Rҍڇ !qrq&5~%y&wV-O- G@P ,}k({`ۂk+IҖ[DI5:vb]Vf/;f WZ"x=i~ݠY("DgUl#I{<{(JwuøbY[OgE}vq9wYryV^ ð:o$w]tf qeBkԗB|fSG %=Tv؆z6 9wF k&Qiq-6]r;? }tnNt,'bXp\Gi H~)7FI`^a"OlAqG?߬ D> RGTӵzgi*A!O_[F3wl~l@ 3#B qC:}M> 7Jġcttҁ.R:E5+ Θ挸@Az! `:QNGv[P j^A%kR$U(ޥ S[ #5P~mh4u Xx~H޵TA [T)LJq}m=-q!r`w>VetNq?~~v3yjhOX{ )L/W-TZx#K0O.x>fr9S(фh;%=bՇJfɤ$`s *Z [tlڠj$%g il?Ÿ0]%< Ғ#u"@(*pH ƢuLPP vcPcSU6˻  t36|^Ota ✧`ũ+vcB,(QcY(ρ(|9jv7sO^&9`|M+ت7m1LjMJ=D㖿tnY7$2wf\񢼽6V1u`hU!<`T>2A g,$C`"BꔖvsF@Z:FYJ)Ԥ"rpw ZM(,Hʎm ϕ Z-FG}'ݎƪn էbtɱ*`/Z 椹u60HrFm~pP0;>bԸ"(f=q^qjViLEGlQqOq>وڨUpUԷ3 ؤ$}'Wtb$Ha=KXamdWmq{"dYBڂ5NIIVU(OA?GIK{&3\': 3]눼|~ ƇTZxU>`|fL5htVD<$h@I 74Ġ"X:`RUBsi%r;[u`?^ PL(>eTӗOcȮH̪mLrPЄ%#R a ֓!LƱ)T%1˧!3֜m8ǫS1o ݯoN'6JwOE+⸒AJ ,.Hd%ȃ}f5.e lNqiL9K:qЂ JnOWt+ʹAP:Yo6)),RQAư0Ʀ3+n&?P5#RА;jO{n uuP_M}eZΕwku׊èAciA/1%1c &l RTwQ ?~QH9·=hog DŽ˾ה7%,I "Ӄ#NzYΟsт$0!@߻Tp~oqg[RcG Y^ )IB8pCP. @J)+?2NV<NT!M1kЕ S pOr?QEzęȨ`!j^\竽gso@;Sr=~>^Vk8;f[l}~M:k, 7ipLw̷z3Z͈V!8C pceevFWs2VKl-V7{cZ) =fBaP区0N.0VK2:4m_˪=<jP ݊wFƔ2Ru,w)4_耈 \)嗜͜>#iѽVMՎ_k{EYyV!T]ĸ;YS^0QlAby 7}(*:jQܒ5V*-mj8odG?$Xzbc@yt ^,״@R b -^<:Th|?ӪmVގOZ>WmkBƁMap ]]Qn Hsk0|tN3P[4ξUXrv2eVޣՠ:OŬOOd5iE-+-# 2qidj0^ԗ8NDk9hME+8)^ PC.%@B =wG걛 Y#i _+X`4ES|B I8=ii/B,rk49&=F즡?-5q]aG.d$ $hE5WݎõƱSY{a>o<؋ӎ#Xwk!?nENt6kOdsb SGt8nAA8 ,W(0O6,0vX/ڄ5.T ]ͼ@b`c13⭽"j{=)a^)|)eo}sqe`[ڹ髝Osm uEh˞Lpk>xa}SpsĄx. :e5'N JsrU,rcᏢ; &߽+҄fŎA4!)Up4oE (3'I] Po/Z&J<,*[1LiPAY.!"116ՠh="]<4A3ˉ_q{gpG< M+ibHPeњ)3 Eʪ)ghǽ>Eh*<]:I#wzL:eUq:9[^+RBk~k:Zw@jvDĕIUǷ]3UgYԻOڮ6}.4f7>11=فM5ϕ@P*Q*AV?O`m'2ÍN'YV(TKY! 
.t|E>{x$WmQQ`= DA""G3}!F9<yQa%21 kB{pITj\ǠvO֕ cc\}ho3%%^ߞg3 xEaDCwE/ Z\SUھO8T U Wz_l/R'H3Z/BAp~ũZL:pF%9=y1y{f#|^ X_[oonv[T7,77МarˎxP`3 \5sd^݄x;Ok!P4Mookי( ܓ8./AC_ !;n>D6øELGSEA ,N^G2^k3()czk1N\WswuԦ!#0up X(+rx. ,(B P1P0-;壣ac Rz<Ԕv.bXeQQ:Z fZ6X"mV55BԥxH)=`gU*srO9 p/i贓RqD?ݕkJ%VK!L@.0Y&9{3 HyPt*Bh`p=6(M/H=a=q(k i Xh5i'_m#~8 5>ߍ<VJ$Qk o71{q#*]s ?>uVx+Щh`O|7x屹7?6mN.G BaCxxoz by!qa$xXLYtƕ./P#NEVTkp/Eo1ĢМ.[I2Z` M-:]Y$q`e;ĕH42ob#D-7b-Ue8:h1*]~v1:nZ% +cZ׽j$!J>W|" ^YTM#TCʏ4(f]l`R|Q.1ڱ!mOֲL<9.rWH+"GPQkmRh Kͳsș4O=E=SHSh=^ Nr-u < tTOE`XlɰT63E{).,S`OHaQIoF}C"@pzr:RqN>i}vASgLq-GqּC'sP ?UUwPz\`A R4Wl*CcDŽnB¡SPzb0(~TtZٷ3TN¤8;ɵc=].'.[w;)!e;t;U ޗ/w"!,.r c8uE{hhi.[y-&74tXas^~ο4]VҖ\ &X;*˪Z"T[Ax{̑¥#] AVj{D7/ȎӻT ~jU 9k6ntjWabT\6wAe㑎=џD1kxL2n>LQX?x#UuM, Dg`'G{ T*N6JCwāe<)ڶQW4vEgfIЛh8/U+(DZ-Km^" JT{&R i-WvYHh k3O㥻y~2'zV-'2 cotqA_^5Յu*X Q^5ݒzl'4h<?25SRV(gP ˿B$!;,|']›"` PP3q XYS(fs{D![\db?i`a.iׯY8%&xf1Z\*NZšP{b\&B\( n,yx-_oId-JƬ]pԭv;oKcN!ˤCSfȜ b,D@r?SVw=Yc:@V.wv4c^X/ 4VTUzs1S, ^J E]4ATdC_$zk[s~;!kpZeW6, DeN؂aP7zilZz`.4X՝ K++{zxnWSv{(0m>1Y.=4:&X6NrNCL[ews:+jp: z0H[/ڐ!ӗ-8YlkΠ1B13ltbee }$mM'9 `IdUڽ]t߷-oWzCU7f_#j2?e2m%A5}g1ИsEoDkyUqVM &3"Xh5ABֲZ/ 5,0 ͅtӎEx3ƍ&iiB_&xCqcvJRT8ckF&6\ i*nyW^4)"#[ȱ臆9hbj&\uηf*)Ӭv޼M^Oə+n{NZV&g^r1?qpK8By}l! 7wW␊*ap(@jmӄRCBZ,v%= {e;aģpӔ*:bPȶZLU([DuN 3&masN´̷b3SƢ5A)li[8yX3nh8b!.C;v&#%+1e'vboݗ"? ]{L-SǤ8bOeU'!i1Ji.h'\ Oơ\I~MogƇTc'^kĄ씲=%IF'vsR9RkTFE~so3UL)Y&-'yNd,P9 &;m3t)DkI(a׍g1N|t{a"0i_s̋*zqfb(xGL_a0j0'_݉߇L‚OauA?`&ٶu갖Z}>N5-ÄX]Ցuڙ5-;ƌ2_hYMx J]i]ko$+}7._EXC/ |ZjZk;HQj{{"YuNU8` 6qX;\h$0iJzjzsg 95p~/pK6re;f{=甩DvҔ2R J*ٕ%c7K\~7BbbF'aP0th?p|S h+;`߱2|2GΩ({v¨ulN_wp;~\z?@2tuutSȆ_<'́g3۽߀xkl0Sll;4@C4C;tqLyA%#N$(El: ɰlRޞK~M?kpt`a=aH']FIVS.Z0*mU0cE}=;Pk`X[3oZ+? d+9Mvd>e>՟Чٿ/y5=bq *c:^Cޤ2/9{ 䋓w9禝+TE)Ar Lj' m Q*Y5UaNVJc.І^;[Fpؙ,/1ʬZEH6dy<;)av=Cu4k7u,tpT ˇ?sNX획_= O!U&] !Gj\nd 4Sٱ;@vpEaud:{R}l['Û:g|&[a?S^#j%>/@SS?oOMm u\}!F9Bz < O6Ko|z_v[. 5X:C@ZC7 bɖh?^d͗gZ]RŹĒ[opSS8N+%?56+J{.aP'IJ$u$FB2MˠY#hrƪUQ-\p5VRAІ-֡?y~S79Kg ,C ^6>q0ImfY29K7ͨ`d#_+<鱰 BJ=~k$!'` C[o}^zC:׏ͻb e]ӨqwrL:>nB2ZxP13@?vѠ5bRŖ|. 
.-xK?0+6ȵ3jryw9yY&|s7?^q=sЕ4-0B|{u\[|rX~Q*NӸXdqw]˳3_pݮ|fKYIi\@7L# (@!U㛇,9zY0N^ N;W["eloF|^Iy+xUqY{~;VfWGտgZ,i@}UW|߽ X]E!ŧ:>>G*l0B0AQ )MT Q?\zf<W&-.,%gYmHʫTe˵hѿ.[Ad{=/z_Gw C"]>=k5?w|:ӷ ٹ/$E;SiڄV`}9g|p-DsR9CF66R%qd /mSj|Jwu.A3PbtJmSԐ+Xb܈3Q>@T|T s`+OǀJ}sl6"AtDMaG]52`nccId8m 3Cj#Z6ficZdсKLAfx E c"0\!1:Yg] {KQJr?Yi1-}u㑑Ÿ:It$ӑDOGH|XkKތ1σ7*ZGILCalN.Bl2+@RȦ"[k3W‡Ʉ$I*dŀ,cW14 shU5/PM;'+mK *I0#{@ -VCX9 CIx=nhC@Ɵ"}wŁj.)" ˮf`ue\aJ +e T8 G{Z<_N͡Zy`0{yڋ4)AT xCMi vԦ$ ITsƭi 9Gndi<Le2D/!z ^&c'<\ai l+vSك[HC$L4e)}{eJ甠i~l½7NcI)AsJ|L9mDy9zz\3"븓 FiQTQRg`:@mշ@ z 2#Y{S %˚ +mRVX 7В J?knv- ʡ dO މ2D*٥T֭hLҔ+ bq*V&M!ڏ9]wA]sġ8n^8v~CF:ߑ(]$^ +;NRP*B cFߤљK.>;Y:S֮?| L` UO/DE LI%ehl-f!fOdӥB ܤPM,WGQ~e:_?NIJVUmH}vMġYN-s&64Ɵ̼YD@0bJd˝,zu܉4jM[[O1Of6L€Ÿ_7*zmPawߛmߒr'{٪62RŨ:iUu^jY7u0"[\TM#};pM7EUŞ̉IϦ N*tlR+yxL([@ XM/ŷPm㶪l(֐թm)k?ݣN,37s(ds"1z+WLZ0ji 4!DN(2ע*cqƶvN-0!әqLԐAdXLMTFjZ]զ\ )Wcq,+@NYg\Ѷ8z-dW||>\ߒtvz"7]ρG+9̥k󑽼L榫\fBb1[ܹ'M//r@8:^._|AP+,_$Ζ|qU7?tuQwMwCDi֪ Q^o}GO= jScO"=$X M L#ou~hc^ۺ<txā<.G|G=:>d[}˽!XoI|%,^iyaH4Ac[(s ,oJDLO/2ޯvkaatxN~Bg N~=4!``ܕ#ocW6"U&pە$yN)Rkw)svׂSg%H2SoNi? Ay~N3=SEXAOb">#Su^6>5XiO^ 0S_t[‹q<+^%wog*h%~{':cz۞V/'&`O[>]t̯]rR0r}8&̿w! |j&?a-X! 
JOamTNb"ωWouW%LLeb*>gI8hzrOROsBevݜ/i$T}R^!dK <[$qvq9D^ ?g^Ei<7WuyzZc 9O RnfSb}%,>(D)WNzYh3 g=znyd=&r#ecԋ7u X˵7ҋ"yl?r0Sriijv9~eB?YK蛠?_HOѶ2K6'?!QȾvOEAqT ܏j٢Wk ?xu2!*1xaz븑_ Nd }] ^3Idcɓ-ʖ|t:>G}`GETY[ލOz$]4%Ї9꺓$b PTu'%Lv5>O@9'}*熣r1o߯`>e|vy&ĉ`%4Żg71sJ%,]L*eJzg딍^nrA' gՇr{u_b^VIڙ0z Ϛ'GFmJς>D飀jd팉5m7sKtwHpMR9K>Sͷ< N/7 s,F9@5/QNC%jFTNtrϒ\xlnOtI8C* Z _Oc ഌq*&$Eb`0Pb?G"}wlL05z.vh@0v}#++f0P6;0ۏ fFp e9ػ?ҁ,5'Ix P3gF$IDEu`lz5ଂC B7{l*٠$c7PK{QAI{mY= [A͌#=1٦65 %YőiL Ǹ4t4w /ɤbk+'KA[-]8^@>ų(֒fWRڹm0<vo%!lF)b8n-Ee3*~lƙGkhZy}QKTw~A^*E1P7( дaumXav`"EZhW"Fȵ_79P!2f $gmkk8W< &Y':,pn 5XR1 1'0*7@DïjPemCUQkLYWIBݕcfk6kP *y_35--o2 b !XlP $F)19tɅwFs0YU` }41$chH-7-'Үv3GPbX0xĖivNwa}!͒[F\ͅ)(1[#Ĥ/h댺 P-E(2bYrNP*A4׼ZJY={>s#7헫{aju/JvEJq!cx, k)tI'0v԰WQL.:m(fR/HIbͣsٽf N>m#T2  Ҧa^PdYIO1jp4G=5 Tp]Qi=d#.QF^fSrdozP\ k1J 1.H r3gal rl7_e)H!'!h S?AO|ר?s,jɣt='4cWчTS4yeccmT S UcrȾ0 Ŧ<^l[NEdJ%Kͥ꣘q^'(+0ׂPG!jr'Ğ{xÓ)8vk*ak|f>_vbalXqKOFw96Uw1ҝJ!0hcb56hI( ߀ 9-'bl%>O"ov0fmu?u?j27狝: wmmȆ (GyMDZ.u1]i0NY5l8$25d߸+%Kb%+*:a12$|vb]S6N? .Rh"wl$;4`? =܃1(ޮy1! Ov~gA7O+"QڙR>_oىvM~Q_ۑYbtPPo4u8{釿WN}W?_, + O???5_pUo *a}|4=?+FβY!z}'-45?&:28yC'y7{4`9?>mt[W[1Ij? # #^@AP>`@GE`baf4)\] M}7;h{&#fcΒ:sk6Ο~Y?pUwy4POa;"Δ 5sF)̌g:PGfɪ^cZe|jN!#D8ᓆ7SD|ilߕ?s@8c}`&IX@$M(F ," Eag1 `];VCш^_lNcBV='y6G}lsm„S;qqGC<*J$OSf'aKxuF;U[ s/<,i !o!HR=5jآhv͋!h\Ә-,PILj=~Œȏg)v&~-L-W08BN_Ί)ؘl[Bf֙m )S)/:oT21ۍR}ᔆj@ΚC'%P?}Ѹ©D$i`MQ !>MΟwN/ӥNpXgː#갋b<[JMڔ)R{AH-vz #bQ+:JHi F0zջYR睄@VZBlWDIUHap \f[a$QWY&b0UY,Iz{J1EY;\ 5"e lwB_Q ݛ,pCN$.NNYx#%95WyZ6F/EZ>TPxޝ(>)(@J 3x bQF ǩ[mtK2f2|uަ["ADIx1u͸] ;7>՟>tQ/rM_>O%TN )<{4$÷fPjt=H ^u._\z xUx#C|e'2n׼ L\3{W>Kr'bjU֜1<[ּe,tiӏ֍ ?)-!0m$a;lǵ/Аg԰JE_%o[fK-?Qxbԝ1A#'yk-?>ԋ~㺢:gQQwE=@Lfɋ 5))]Ms9+sH7 9t1ӷ9Z{zf~eY*UeUJ-K;2J{ Q;<b3`,πu[-D] $x$a! 
u(dS9\屵e¢80 U;hFe"4èX18vW6^C%{E "r~~] ae#PCFݦPI$Mi{f-0T ;mRN R,-~sd$1@- LEpi$;D E`8^Cnı3CJW*'н@EhqE<J4 mtateDA91'ϑ 1t8Xy$(1[1T(emC%PctS(+bh.CchmTupGR]𣹉P Wqq[rbި3 SmL mΏuBS/`ee[@|6N2\}'|?"Lq @Sڽ%FfEM6R}_,Kr!@\\Y%aۃ|NPֱ/=DWEf,ag48bY,R҉+0>0=5J7I_RVʒk#J3rvk⵿tKG Ņ?2f c1 > D Ht6j5&_\Gha (U{G;K#EѐL AcA%B9-5m0~miY.wH`;݅mCvHa Xސ͡?zPM9͡*uՊSEI@C@1>h婵[5fiA.'lxIYK`&3nSo{F0!BMvAM*RZvo-v&mX.˗ F-,Vl"m*}Ww+Xă%^ ZA].&'NL-)ʿ~TQxEY=Ԋ"SM0/+*͍ޘTXb9\LX С34L9 JŃߊV qZt}p5NK) L)h[qʍUIU+,5Y 3i?3$VZ\; Em5$.o ~M ~T] ]=t8ktnz~犹i8qݳbt AKۖW[k3 A[h*{Z|Y^l&;i@BBTAIEK^0LfC5D-e*StW@鄈;*FHMbؘJ *KPzaQdr REHqh&5!eKB{r>_ک]]' ]*~l?nby~}z Ŵ[{R_%mKl;oH_32vǡzHޙ]{-TWvss\c1g8-VOJzG>>8tOI%DḺ |:_f.I3;- ?ϕvK|/yũQ>_[a|zJrZ#Q>uiqҊ/(_?^K{^H6e}926V kQm@<[ e`RXhItӬӝ (k#J'k'Q@6 \j$sG2Vf<}~߾M9aW#}?oX$|%6[>k3W~n^Vy֎ nɢУr OVR YgWbV7dR9rX), ߹Ġ @M?;'o]Nj:5V}:6ҙ7{Fj}v;#MАO`66` zH,sC]jk[-ܞmNr anKƅRA.,ʱ2W,ϏK[ոp~Y|@٭m?T[y;?߯tMY.f~=>I'Uo\=ϧ'tKjgFAn,hאZ1FE g1&-V %>]׭)kܩBZBG3W:v$Mym$j.~Rsx`9cb@\U鱗H2GY/16,ۤ_kmE%9>}  0\ \.7PRE{]Zzp:Jh39|酳Kϼ8K]Fuي:yז_ݭX{tOu9M2f/#}_?*zVTj{Ȏ{?\5+!â5k>5 /PSdeĹSe)Ѕ0:[k]R_aC&ǃm[Cn>rAB6w`z;xH &' ̮ʜsfWfxN`?W} hjߋx!$pZY[xa>.fjEG !A(DmOEk-W#_Ds﫯׽A U~􍣠ݗCϵgR@!d "7'}/jyz^?JeyWPxMдHAQVl'Fo}5-b&{u`- &F3UzPTHoPI)G\{J-ZCsY !kJ%̕J29e 'E  Vy9aj*ybۤE^M+ޫ"(qET:ACC0~ZyUW> WbPy8QE$ Wqݯ{\2*Wbi< SITCe3TxDBtrOtS$ )7,}h'%J[-]TxQyKYvwbOZ, gR+o1UDb,R5fI̘"3-/ -93d|u~"GYhZp Q1ÉL&*$1s$6j+BUkj}*ڱkW% `T5ea=var/home/core/zuul-output/logs/kubelet.log0000644000000000000000001735060515144462714017714 0ustar rootrootFeb 16 00:08:43 crc systemd[1]: Starting Kubernetes Kubelet... Feb 16 00:08:45 crc kubenswrapper[5114]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 16 00:08:45 crc kubenswrapper[5114]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. 
Will be removed in a future version.
Feb 16 00:08:45 crc kubenswrapper[5114]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 00:08:45 crc kubenswrapper[5114]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 00:08:45 crc kubenswrapper[5114]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 16 00:08:45 crc kubenswrapper[5114]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.090110 5114 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.103956 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.103994 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.103998 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104002 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104008 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104013 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104019 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104026 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104029 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104033 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104037 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104041 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104045 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104048 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104052 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104055 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104059 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104063 5114 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104066 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104070 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104074 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104077 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104080 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104084 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104087 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104092 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104098 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104102 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104112 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104117 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104121 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104125 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104129 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104133 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104137 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104140 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104144 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104148 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104151 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104155 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104159 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104163 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104167 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104171 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104176 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104180 5114 feature_gate.go:328] unrecognized feature gate: Example2
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104185 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104189 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104193 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104196 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104200 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104204 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104207 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104211 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104215 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104219 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104226 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104231 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104234 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104238 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104241 5114 feature_gate.go:328] unrecognized feature gate: Example
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104256 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104260 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104263 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104266 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104270 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104273 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104277 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104280 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104283 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104287 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104291 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104295 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104298 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104302 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104306 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104309 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104313 5114 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104318 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104322 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104326 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104330 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104334 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 00:08:45 crc
kubenswrapper[5114]: W0216 00:08:45.104337 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104342 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104346 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104960 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104968 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104971 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104975 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104979 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104982 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104985 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104989 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104992 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104996 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.104999 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105002 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105006 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105011 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105015 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105018 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105021 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105026 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105030 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105033 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105036 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105040 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105043 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105047 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105051 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105054 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105057 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105060 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105064 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105068 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105071 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105074 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105077 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105080 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105084 5114 feature_gate.go:328] unrecognized feature gate: Example2
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105088 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105092 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105096 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105100 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105104 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105107 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105111 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105115 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105119 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105123 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105126 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105130 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105133 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105138 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105143 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105146 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105150 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105153 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105156 5114 feature_gate.go:328] unrecognized feature gate:
InsightsOnDemandDataGather
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105160 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105165 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105168 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105172 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105176 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105179 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105182 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105185 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105188 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105192 5114 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105195 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105198 5114 feature_gate.go:328] unrecognized feature gate: Example
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105201 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105206 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105209 5114 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105212 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105216 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105219 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105222 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105225 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105228 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105232 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105235 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105238 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105241 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105265 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105270 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105274 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105278 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105281 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105285 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.105288 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110566 5114 flags.go:64] FLAG: --address="0.0.0.0"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110594 5114 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110604 5114 flags.go:64] FLAG: --anonymous-auth="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110612 5114 flags.go:64] FLAG: --application-metrics-count-limit="100"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110623 5114 flags.go:64] FLAG: --authentication-token-webhook="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110636 5114 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110651 5114 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110657 5114 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110661 5114 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110665 5114 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110669 5114 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110674 5114 flags.go:64] FLAG:
--cert-dir="/var/lib/kubelet/pki"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110678 5114 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110682 5114 flags.go:64] FLAG: --cgroup-root=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110686 5114 flags.go:64] FLAG: --cgroups-per-qos="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110690 5114 flags.go:64] FLAG: --client-ca-file=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110693 5114 flags.go:64] FLAG: --cloud-config=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110697 5114 flags.go:64] FLAG: --cloud-provider=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110700 5114 flags.go:64] FLAG: --cluster-dns="[]"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110707 5114 flags.go:64] FLAG: --cluster-domain=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110712 5114 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110718 5114 flags.go:64] FLAG: --config-dir=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110723 5114 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110729 5114 flags.go:64] FLAG: --container-log-max-files="5"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110737 5114 flags.go:64] FLAG: --container-log-max-size="10Mi"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110742 5114 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110747 5114 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110752 5114 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110756 5114 flags.go:64] FLAG: --contention-profiling="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110760 5114 flags.go:64] FLAG: --cpu-cfs-quota="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110764 5114 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110769 5114 flags.go:64] FLAG: --cpu-manager-policy="none"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110773 5114 flags.go:64] FLAG: --cpu-manager-policy-options=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110779 5114 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110783 5114 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110788 5114 flags.go:64] FLAG: --enable-debugging-handlers="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110792 5114 flags.go:64] FLAG: --enable-load-reader="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110797 5114 flags.go:64] FLAG: --enable-server="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110801 5114 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110806 5114 flags.go:64] FLAG: --event-burst="100"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110810 5114 flags.go:64] FLAG: --event-qps="50"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110814 5114 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110818 5114 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110823 5114 flags.go:64] FLAG: --eviction-hard=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110828 5114 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110832 5114 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110836 5114 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110840 5114 flags.go:64] FLAG: --eviction-soft=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110844 5114 flags.go:64] FLAG: --eviction-soft-grace-period=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110848 5114 flags.go:64] FLAG: --exit-on-lock-contention="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110852 5114 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110860 5114 flags.go:64] FLAG: --experimental-mounter-path=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110881 5114 flags.go:64] FLAG: --fail-cgroupv1="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110890 5114 flags.go:64] FLAG: --fail-swap-on="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110898 5114 flags.go:64] FLAG: --feature-gates=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110908 5114 flags.go:64] FLAG: --file-check-frequency="20s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110924 5114 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110933 5114 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110942 5114 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110950 5114 flags.go:64] FLAG: --healthz-port="10248"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110958 5114 flags.go:64] FLAG: --help="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110966 5114 flags.go:64] FLAG: --hostname-override=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110974 5114 flags.go:64] FLAG: --housekeeping-interval="10s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110982 5114 flags.go:64] FLAG: --http-check-frequency="20s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110990 5114 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.110998 5114 flags.go:64] FLAG: --image-credential-provider-config=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111006 5114 flags.go:64] FLAG: --image-gc-high-threshold="85"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111014 5114 flags.go:64] FLAG: --image-gc-low-threshold="80"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111021 5114 flags.go:64] FLAG: --image-service-endpoint=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111029 5114 flags.go:64] FLAG: --kernel-memcg-notification="false"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111038 5114 flags.go:64] FLAG: --kube-api-burst="100"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111046 5114 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111055 5114 flags.go:64] FLAG: --kube-api-qps="50"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111062 5114 flags.go:64] FLAG: --kube-reserved=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111070 5114 flags.go:64] FLAG: --kube-reserved-cgroup=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111078 5114 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111086 5114 flags.go:64] FLAG: --kubelet-cgroups=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111093 5114 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111101 5114 flags.go:64] FLAG: --lock-file=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111109 5114
flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111117 5114 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111125 5114 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111137 5114 flags.go:64] FLAG: --log-json-split-stream="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111145 5114 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111153 5114 flags.go:64] FLAG: --log-text-split-stream="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111160 5114 flags.go:64] FLAG: --logging-format="text" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111168 5114 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111177 5114 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111186 5114 flags.go:64] FLAG: --manifest-url="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111195 5114 flags.go:64] FLAG: --manifest-url-header="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111207 5114 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111215 5114 flags.go:64] FLAG: --max-open-files="1000000" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111225 5114 flags.go:64] FLAG: --max-pods="110" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111234 5114 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111242 5114 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111276 5114 flags.go:64] FLAG: --memory-manager-policy="None" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 
00:08:45.111283 5114 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111291 5114 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111300 5114 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111308 5114 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111327 5114 flags.go:64] FLAG: --node-status-max-images="50" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111335 5114 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111343 5114 flags.go:64] FLAG: --oom-score-adj="-999" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111352 5114 flags.go:64] FLAG: --pod-cidr="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111363 5114 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111376 5114 flags.go:64] FLAG: --pod-manifest-path="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111384 5114 flags.go:64] FLAG: --pod-max-pids="-1" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111392 5114 flags.go:64] FLAG: --pods-per-core="0" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111401 5114 flags.go:64] FLAG: --port="10250" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111409 5114 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111417 5114 flags.go:64] FLAG: --provider-id="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111424 5114 flags.go:64] FLAG: --qos-reserved="" Feb 16 00:08:45 crc 
kubenswrapper[5114]: I0216 00:08:45.111432 5114 flags.go:64] FLAG: --read-only-port="10255" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111441 5114 flags.go:64] FLAG: --register-node="true" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111448 5114 flags.go:64] FLAG: --register-schedulable="true" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111456 5114 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111470 5114 flags.go:64] FLAG: --registry-burst="10" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111478 5114 flags.go:64] FLAG: --registry-qps="5" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111485 5114 flags.go:64] FLAG: --reserved-cpus="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111494 5114 flags.go:64] FLAG: --reserved-memory="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111503 5114 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111512 5114 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111522 5114 flags.go:64] FLAG: --rotate-certificates="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111530 5114 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111539 5114 flags.go:64] FLAG: --runonce="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111548 5114 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111556 5114 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111564 5114 flags.go:64] FLAG: --seccomp-default="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111572 5114 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 16 00:08:45 crc 
kubenswrapper[5114]: I0216 00:08:45.111580 5114 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111589 5114 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111614 5114 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111622 5114 flags.go:64] FLAG: --storage-driver-password="root" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111630 5114 flags.go:64] FLAG: --storage-driver-secure="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111638 5114 flags.go:64] FLAG: --storage-driver-table="stats" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111646 5114 flags.go:64] FLAG: --storage-driver-user="root" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111655 5114 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111664 5114 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111672 5114 flags.go:64] FLAG: --system-cgroups="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111679 5114 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111693 5114 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111701 5114 flags.go:64] FLAG: --tls-cert-file="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111708 5114 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111721 5114 flags.go:64] FLAG: --tls-min-version="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111728 5114 flags.go:64] FLAG: --tls-private-key-file="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111736 5114 flags.go:64] FLAG: 
--topology-manager-policy="none" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111744 5114 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111753 5114 flags.go:64] FLAG: --topology-manager-scope="container" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111760 5114 flags.go:64] FLAG: --v="2" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111771 5114 flags.go:64] FLAG: --version="false" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111781 5114 flags.go:64] FLAG: --vmodule="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111791 5114 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.111800 5114 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.111982 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.111992 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112001 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112010 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112018 5114 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112026 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112033 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112041 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112048 5114 
feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112055 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112063 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112070 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112077 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112085 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112092 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112099 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112107 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112114 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112122 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112129 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112140 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112149 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112157 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112165 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112173 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112182 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112189 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112197 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112205 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112213 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112221 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112228 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112235 5114 feature_gate.go:328] unrecognized feature gate: Example Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112265 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112273 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112281 
5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112288 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112295 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112302 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112311 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112318 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112328 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112336 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112344 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112351 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112358 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112365 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112372 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112379 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 16 
00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112388 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112395 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112402 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112410 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112418 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112426 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112433 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112440 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112447 5114 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112456 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112465 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112475 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112485 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112520 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 
00:08:45.112529 5114 feature_gate.go:328] unrecognized feature gate: Example2 Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112539 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112549 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112556 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112563 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112570 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112577 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112586 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112596 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112605 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112614 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112623 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112662 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112672 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112682 5114 
feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112691 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112700 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112708 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112716 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112743 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112772 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112782 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.112799 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.112830 5114 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.135939 5114 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.135996 5114 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" 
GOTRACEBACK="" Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136113 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136135 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136144 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136154 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136164 5114 feature_gate.go:328] unrecognized feature gate: DualReplica Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136173 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136181 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136189 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136197 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136205 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136212 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136219 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136228 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136238 5114 feature_gate.go:328] unrecognized feature gate: NewOLM Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136279 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136291 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136300 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136309 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136316 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136323 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136330 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136338 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136346 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136353 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136361 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136371 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136379 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136386 5114 feature_gate.go:328] 
unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136393 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136400 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136408 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136415 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136423 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136430 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136437 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136445 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136452 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136459 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136467 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136474 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136481 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136488 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136499 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136506 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136514 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136521 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136530 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136538 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136546 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136555 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136562 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136571 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136578 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136586 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136594 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136601 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136608 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136615 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136625 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136666 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136675 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136684 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136693 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136703 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136716 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
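The flood of `feature_gate.go:328] unrecognized feature gate: …` warnings above comes from gate names this kubelet build does not know (they are OpenShift cluster-level gates passed down in the node config); the warnings are noisy but each gate repeats many times. A small triage sketch, assuming only that gate names are single whitespace-free tokens after the fixed message prefix, which is how every warning in this log is shaped:

```python
import re

def unrecognized_gates(log_text: str) -> set[str]:
    # Collect the unique gate names from "unrecognized feature gate: X" warnings.
    return set(re.findall(r"unrecognized feature gate: (\S+)", log_text))

# Hypothetical two-warning sample in the same shape as the log above.
sample = (
    "W0216 00:08:45.136393 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus "
    "W0216 00:08:45.137761 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus"
)
print(sorted(unrecognized_gates(sample)))  # -> ['UpgradeStatus']
```

Run against the full log, this collapses the hundreds of warning lines into one deduplicated list of gate names worth checking against the kubelet version.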
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136729 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136739 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136749 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136758 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136767 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136776 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136786 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136796 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136804 5114 feature_gate.go:328] unrecognized feature gate: Example
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136814 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136824 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136835 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136844 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136854 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136863 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136872 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136881 5114 feature_gate.go:328] unrecognized feature gate: Example2
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136890 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136898 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136906 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.136913 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.136926 5114 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137145 5114 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
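The `feature_gate.go:384] feature gates: {map[...]}` line is the useful summary: it is the effective gate set the kubelet actually applied, in klog's Go `map[Key:value ...]` dump format. A minimal parsing sketch, assuming only what holds for this line: keys contain no colons and values are the literals `true`/`false`:

```python
import re

def parse_gate_map(line: str) -> dict[str, bool]:
    # Pull the body out of the Go-style "map[Name:true Name2:false]" dump
    # and turn each space-separated Name:value pair into a dict entry.
    body = re.search(r"map\[([^\]]*)\]", line).group(1)
    return {k: v == "true" for k, v in (pair.split(":") for pair in body.split())}

# Hypothetical shortened version of the log's gate-map line.
line = "feature gates: {map[ImageVolume:true NodeSwap:false KMSv1:true]}"
print(parse_gate_map(line))  # -> {'ImageVolume': True, 'NodeSwap': False, 'KMSv1': True}
```

This makes it easy to diff the applied gates (e.g. `NodeSwap:false`, `ServiceAccountTokenNodeBinding:true` in the full line above) against the cluster's intended configuration.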
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137160 5114 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137168 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137176 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137185 5114 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137193 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137201 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137208 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137217 5114 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137225 5114 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137233 5114 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137240 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137282 5114 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137293 5114 feature_gate.go:328] unrecognized feature gate: OVNObservability
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137302 5114 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137311 5114 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137318 5114 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137325 5114 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137333 5114 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137340 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137348 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137356 5114 feature_gate.go:328] unrecognized feature gate: Example
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137363 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137371 5114 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137378 5114 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137385 5114 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137393 5114 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137401 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137410 5114 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137420 5114 feature_gate.go:328] unrecognized feature gate: PinnedImages
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137428 5114 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137435 5114 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137444 5114 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137451 5114 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137458 5114 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137465 5114 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137473 5114 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137482 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137489 5114 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137496 5114 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137504 5114 feature_gate.go:328] unrecognized feature gate: Example2
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137512 5114 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137519 5114 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137527 5114 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137534 5114 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137541 5114 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137548 5114 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137556 5114 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137563 5114 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137570 5114 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137577 5114 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137585 5114 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137593 5114 feature_gate.go:328] unrecognized feature gate: DualReplica
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137600 5114 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137607 5114 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137614 5114 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137622 5114 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137630 5114 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137637 5114 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137644 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137653 5114 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137661 5114 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137669 5114 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137677 5114 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137684 5114 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137693 5114 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137701 5114 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137708 5114 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137716 5114 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137723 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137732 5114 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137739 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137747 5114 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137754 5114 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137761 5114 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137769 5114 feature_gate.go:328] unrecognized feature gate: NewOLM
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137776 5114 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137784 5114 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137791 5114 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137798 5114 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137806 5114 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137813 5114 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137820 5114 feature_gate.go:328] unrecognized feature gate: SignatureStores
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137828 5114 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137836 5114 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Feb 16 00:08:45 crc kubenswrapper[5114]: W0216 00:08:45.137843 5114 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.137856 5114 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.142077 5114 server.go:962] "Client rotation is on, will bootstrap in background"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.151161 5114 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.161333 5114 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.161437 5114 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.164131 5114 server.go:1019] "Starting client certificate rotation"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.164274 5114 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.165616 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.243444 5114 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.248492 5114 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.252934 5114 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.295378 5114 log.go:25] "Validated CRI v1 runtime API"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.476842 5114 log.go:25] "Validated CRI v1 image API"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.485138 5114 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.504563 5114 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-02-16-00-02-15-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2]
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.504680 5114 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}]
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.539314 5114 manager.go:217] Machine: {Timestamp:2026-02-16 00:08:45.534927041 +0000 UTC m=+1.916203949 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:22e33d55-d1b2-40e6-8445-92fd0fd602a7 BootID:97e4fb25-1ecb-4aec-afc8-32d47170a2de Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:93:14:d8 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:93:14:d8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a9:c5:cb Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:fc:2d:f1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:98:a4:60 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:ed:e7:22 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:85:20:75:d0:96 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:36:59:07:a7:b7:a9 Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.539824 5114 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
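The cAdvisor `Machine:` entry packs the whole node inventory into one line. The headline figures decode straightforwardly; a small arithmetic sketch using values copied from the log entry above (nothing here is an API call, just unit conversion of the reported byte counts):

```python
# Values copied verbatim from the Machine entry in the log above.
MEMORY_CAPACITY = 33649930240   # MemoryCapacity, bytes
SWAP_CAPACITY = 0               # SwapCapacity, bytes (swap disabled)
NUM_CORES = 12                  # NumCores
DISK_SIZE = 214748364800        # DiskMap vda Size, bytes

def gib(n_bytes: int) -> float:
    # Convert bytes to binary gibibytes.
    return n_bytes / 2**30

print(f"memory: {gib(MEMORY_CAPACITY):.2f} GiB")  # -> memory: 31.34 GiB
print(f"disk:   {gib(DISK_SIZE):.0f} GiB")        # -> disk:   200 GiB
print(f"cores:  {NUM_CORES}, swap: {SWAP_CAPACITY}")
```

So the node reports ~31 GiB of RAM, a 200 GiB virtual disk, 12 vCPUs (each presented as its own socket, which is typical of a QEMU/KVM guest like this CRC VM), and no swap, consistent with `NodeSwap:false` in the applied feature gates.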
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.540165 5114 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.543198 5114 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.543292 5114 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.543639 5114 topology_manager.go:138] "Creating topology manager with none policy"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.543660 5114 container_manager_linux.go:306] "Creating device plugin manager"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.543705 5114 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.549184 5114 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.550829 5114 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.551160 5114 server.go:1267] "Using root directory" path="/var/lib/kubelet"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.588920 5114 kubelet.go:491] "Attempting to sync node with API server"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.589024 5114 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.589079 5114 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.589117 5114 kubelet.go:397] "Adding apiserver pod source"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.589164 5114 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.595855 5114 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.595896 5114 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.600223 5114 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.600299 5114 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.601018 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.601087 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.614392 5114 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.614782 5114 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.616410 5114 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623693 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623727 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623745 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623754 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623764 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623774 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623784 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623794 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623807 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623830 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.623844 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.631406 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.634342 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.634387 5114 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.636495 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.703632 5114 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.703745 5114 server.go:1295] "Started kubelet"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.704027 5114 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.704138 5114 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.704336 5114 server_v1.go:47] "podresources" method="list" useActivePods=true
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.705335 5114 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 16 00:08:45 crc systemd[1]: Started Kubernetes Kubelet.
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.707605 5114 server.go:317] "Adding debug handlers to kubelet server" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.709938 5114 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.710339 5114 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.711777 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.712001 5114 volume_manager.go:295] "The desired_state_of_world populator starts" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.712035 5114 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.712059 5114 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.712175 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.709273 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894917e708d9756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.703681878 +0000 UTC m=+2.084958726,LastTimestamp:2026-02-16 00:08:45.703681878 +0000 UTC m=+2.084958726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.713061 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.714817 5114 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.715317 5114 factory.go:55] Registering systemd factory Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.715331 5114 factory.go:223] Registration of the systemd container factory successfully Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.716080 5114 factory.go:153] Registering CRI-O factory Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.716100 5114 factory.go:223] Registration of the crio container factory successfully Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.716131 5114 factory.go:103] Registering Raw factory Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.716148 5114 manager.go:1196] Started watching for new ooms in manager Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.716810 5114 manager.go:319] Starting recovery of all containers Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.767287 5114 manager.go:324] Recovery completed Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.787898 5114 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788002 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788017 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788028 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788041 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788054 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788067 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788079 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788096 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788111 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788123 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788138 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788151 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" 
volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788166 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788182 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788199 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788212 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788224 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788236 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" 
volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788282 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788301 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788316 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788331 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788342 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788355 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" 
seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788368 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788381 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788394 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788410 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788445 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788459 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788473 5114 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788487 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788498 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788512 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788524 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788536 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788548 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788560 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788574 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788590 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788636 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788650 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788662 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788675 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788689 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788702 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788718 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788732 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788747 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" 
seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788762 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788777 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788793 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788809 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788824 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788839 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: 
I0216 00:08:45.788860 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788874 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788887 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788901 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788917 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788931 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788945 5114 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788959 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788973 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.788990 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789003 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789015 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789028 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789043 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789055 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789069 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789083 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789095 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789108 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789122 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789137 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789150 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789164 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789177 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789191 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789204 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789218 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789231 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789263 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789278 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789291 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789304 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789316 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789329 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789341 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789356 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789370 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789383 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789394 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789405 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789417 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789429 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789442 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.789453 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.791798 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.793842 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.793877 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.793889 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795610 5114 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795656 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795676 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795690 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795701 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795714 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795730 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795745 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795763 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795778 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795783 5114 cpu_manager.go:222] "Starting CPU manager" policy="none"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795793 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795811 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795825 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795860 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795876 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795795 5114 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795889 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795911 5114 state_mem.go:36] "Initialized new in-memory state store"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795921 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795936 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795950 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795963 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795977 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.795992 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796005 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796017 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796029 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796044 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796059 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796074 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796087 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796098 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796108 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796120 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796131 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796144 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796156 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796168 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796179 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796191 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796203 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796215 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796228 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796240 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796272 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796285 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796323 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796338 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796364 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796377 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796391 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796406 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796420 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796434 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796449 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796462 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796473 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796486 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796497 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796509 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796521 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796534 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796547 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796560 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796572 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796584 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796621 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796634 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796647 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796660 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796672 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796684 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796698 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796712 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796726 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796737 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796751 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796763 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796776 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796789 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796801 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796812 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796824 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796836 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796849 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796860 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796872 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796884 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796896 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796908 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796920 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796932 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796945 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796958 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796970 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796982 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext=""
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.796994 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir"
seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797008 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797020 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797032 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797056 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797069 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797080 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Feb 16 00:08:45 crc 
kubenswrapper[5114]: I0216 00:08:45.797092 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797112 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797129 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797142 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797161 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797173 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797185 5114 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797204 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797217 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797231 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797261 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797279 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797294 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" 
volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797309 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797321 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797333 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797346 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797358 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797369 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797382 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797393 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797405 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797422 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797434 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797471 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" 
seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797484 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797495 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797507 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797519 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797531 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797543 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797555 5114 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797567 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797579 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797590 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797602 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797619 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797631 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797643 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797655 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797666 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797676 5114 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797686 5114 reconstruct.go:97] "Volume reconstruction finished" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.797693 5114 reconciler.go:26] "Reconciler: start to sync state" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.800715 5114 policy_none.go:49] "None policy: Start" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.800737 5114 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.800752 
5114 state_mem.go:35] "Initializing new in-memory state store" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.813221 5114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.813639 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.815390 5114 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.815447 5114 status_manager.go:230] "Starting to sync pod status with apiserver" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.815490 5114 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.815507 5114 kubelet.go:2451] "Starting kubelet main sync loop" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.815637 5114 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.816447 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.855942 5114 manager.go:341] "Starting Device Plugin manager" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.856306 5114 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.856342 5114 server.go:85] 
"Starting device plugin registration server" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.856978 5114 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.857032 5114 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.857270 5114 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.857491 5114 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.857548 5114 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.863949 5114 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\"" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.864056 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.912771 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.915949 5114 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.916256 5114 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.917524 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.917573 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.917594 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.920066 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.920743 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.920807 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.924567 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.924635 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.924651 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.924890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.924928 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:45 crc 
kubenswrapper[5114]: I0216 00:08:45.924945 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.925879 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.925971 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.926014 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.926695 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.926737 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.926750 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.926832 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.926874 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.926887 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.927553 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.927757 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.927825 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.928735 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.928754 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.928787 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.928803 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.928819 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.928807 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.929528 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.929606 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.929635 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.930305 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.930341 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.930353 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.930400 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.930431 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.930441 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.931218 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.931270 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.931725 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.931753 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.931762 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.957461 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.957533 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.958496 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.958533 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.958542 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:45 crc kubenswrapper[5114]: I0216 00:08:45.958564 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.959351 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.963199 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.981070 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:45 crc kubenswrapper[5114]: E0216 00:08:45.999438 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.003863 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.101583 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.101681 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.101750 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.101916 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.101975 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102205 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102232 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102283 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102315 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102344 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102368 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102393 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102429 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102458 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102483 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102506 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102526 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102551 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102574 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102598 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102619 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102646 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102672 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.102895 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.103719 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.104266 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.104230 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.104911 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.105919 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.108858 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.160394 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.161814 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.161900 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.161922 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.161969 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.162776 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204523 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204688 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204712 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204753 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204860 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204867 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204876 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204972 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.204990 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205067 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205093 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205101 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205113 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205166 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205186 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205195 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205219 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205288 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205296 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205301 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205362 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205368 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205297 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205415 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205455 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205486 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205513 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205518 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205544 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205571 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205582 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.205546 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.259315 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.266401 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.282904 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.300825 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.304522 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.314181 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms"
Feb 16 00:08:46 crc kubenswrapper[5114]: W0216 00:08:46.387242 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-5ba75b22c2553b0e87ae46ee68caca18ccc1c6e7d49eb5ef1b584270279575f3 WatchSource:0}: Error finding container 5ba75b22c2553b0e87ae46ee68caca18ccc1c6e7d49eb5ef1b584270279575f3: Status 404 returned error can't find the container with id 5ba75b22c2553b0e87ae46ee68caca18ccc1c6e7d49eb5ef1b584270279575f3
Feb 16 00:08:46 crc kubenswrapper[5114]: W0216 00:08:46.389161 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-7fd9f688b4cc6df61e66ab93808b6948c013c7bb0a5f7bb5523c02964bec663d WatchSource:0}: Error finding container 7fd9f688b4cc6df61e66ab93808b6948c013c7bb0a5f7bb5523c02964bec663d: Status 404 returned error can't find the container with id 7fd9f688b4cc6df61e66ab93808b6948c013c7bb0a5f7bb5523c02964bec663d
Feb 16 00:08:46 crc kubenswrapper[5114]: W0216 00:08:46.393931 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-ac1749cb755c20f0a006e466933f713519d5f9c45688aaf3ce4b38ba08de9071 WatchSource:0}: Error finding container ac1749cb755c20f0a006e466933f713519d5f9c45688aaf3ce4b38ba08de9071: Status 404 returned error can't find the container with id ac1749cb755c20f0a006e466933f713519d5f9c45688aaf3ce4b38ba08de9071
Feb 16 00:08:46 crc kubenswrapper[5114]: W0216 00:08:46.394387 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-67c937e2f4a855ee348213aca1745303c029ca29f78eb4965a065564394f880e WatchSource:0}: Error finding container 67c937e2f4a855ee348213aca1745303c029ca29f78eb4965a065564394f880e: Status 404 returned error can't find the container with id 67c937e2f4a855ee348213aca1745303c029ca29f78eb4965a065564394f880e
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.394976 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 16 00:08:46 crc kubenswrapper[5114]: W0216 00:08:46.395431 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-e6bb05c792e75780f5fe92c50049ccdf5d9bafd2a6bd0ec434a581ce055756f9 WatchSource:0}: Error finding container e6bb05c792e75780f5fe92c50049ccdf5d9bafd2a6bd0ec434a581ce055756f9: Status 404 returned error can't find the container with id e6bb05c792e75780f5fe92c50049ccdf5d9bafd2a6bd0ec434a581ce055756f9
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.563404 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.564415 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.564444 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.564467 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.564490 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.564910 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.637693 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.707534 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.769160 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.823566 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"e6bb05c792e75780f5fe92c50049ccdf5d9bafd2a6bd0ec434a581ce055756f9"}
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.824884 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"67c937e2f4a855ee348213aca1745303c029ca29f78eb4965a065564394f880e"}
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.826012 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ac1749cb755c20f0a006e466933f713519d5f9c45688aaf3ce4b38ba08de9071"}
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.827209 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7fd9f688b4cc6df61e66ab93808b6948c013c7bb0a5f7bb5523c02964bec663d"}
Feb 16 00:08:46 crc kubenswrapper[5114]: I0216 00:08:46.828216 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5ba75b22c2553b0e87ae46ee68caca18ccc1c6e7d49eb5ef1b584270279575f3"}
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.893479 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 16 00:08:46 crc kubenswrapper[5114]: E0216 00:08:46.927820 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.115610 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="1.6s"
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.294390 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.295782 5114 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.365934 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.366824 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.366864 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.366876 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.366899 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.367490 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc"
Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.545067 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1894917e708d9756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.703681878 +0000 UTC m=+2.084958726,LastTimestamp:2026-02-16 00:08:45.703681878 +0000 UTC m=+2.084958726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.637887 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.833492 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"cbb2f8f39b9f3bee939bb471570744d580cfdb439c253b8460cacbfda0adfbf4"}
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.833538 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda"}
Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.835339 5114 generic.go:358] "Generic (PLEG): container finished"
podID="3a14caf222afb62aaabdc47808b6f944" containerID="8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889" exitCode=0 Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.835427 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889"} Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.835657 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.837020 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.837071 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.837086 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.837459 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.837690 5114 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c" exitCode=0 Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.837762 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c"} Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.838013 5114 kubelet_node_status.go:413] "Setting node annotation to 
enable volume controller attach/detach" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.839022 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.839061 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.839073 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.839338 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.839492 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.840470 5114 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b" exitCode=0 Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.840558 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b"} Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.840721 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.840985 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.841012 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.841026 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.841366 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.841390 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.841510 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.841664 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.842665 5114 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25" exitCode=0 Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.842777 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.842789 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25"} Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.843367 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.843405 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 16 00:08:47 crc kubenswrapper[5114]: I0216 00:08:47.843425 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:47 crc kubenswrapper[5114]: E0216 00:08:47.843670 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.555573 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.640549 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.716286 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="3.2s" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.835100 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.852103 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"5dbac4f55a4e2c2f3e9685aef58c61e28ac3f768691715b8218f6a5c80dd6d81"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.852170 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ebf9c3d019e33707c276dab2a0fc3eded08e87049610ece88fb23aebc8fe70fd"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.852281 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.853484 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.853513 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.853629 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.854076 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.858083 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.858108 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.858118 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.860098 5114 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb" exitCode=0 Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.860168 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.860278 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.861408 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.861432 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.861444 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.861616 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.869257 5114 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"a6f1dde85e03a42b4451963a332e5b67b46f9f2e20df9ff9d84072649ce88c2c"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.869298 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"3bf4f94ba97d4ae528d0ebb96d364672d87f90e197fea356ea55ca938edadcd0"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.869309 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"288e2fbc2214d418ac3020d245ad8aaf063f8e63b8fb410077b4f83c7b0e8887"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.869341 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.869757 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.869784 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.869796 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.869971 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.870944 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"194a5bb705405e17e124fa501a1108736f68e3acb7d24b8735925b360887f0a7"} Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.871057 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.871492 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.871519 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.871530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.871675 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.968459 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.969474 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.969517 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.969530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:48 crc kubenswrapper[5114]: I0216 00:08:48.969556 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.969958 5114 kubelet_node_status.go:110] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.233:6443: connect: connection refused" node="crc" Feb 16 00:08:48 crc kubenswrapper[5114]: E0216 00:08:48.995931 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.429336 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.637473 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 16 00:08:49 crc kubenswrapper[5114]: E0216 00:08:49.687926 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.233:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.877653 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5484805ff3a94ca4034b8ad5ab4faaf70ca7648c1098bd73bfba6861c2a25bf4"} Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.877715 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e"} Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.877917 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.878993 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.879037 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.879049 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:49 crc kubenswrapper[5114]: E0216 00:08:49.879309 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.881206 5114 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493" exitCode=0 Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.881425 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.881550 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.881804 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.881853 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493"} Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.881425 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882277 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882302 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882312 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882331 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882366 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882380 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882404 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882454 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.882476 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:49 crc kubenswrapper[5114]: E0216 00:08:49.882623 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:49 crc kubenswrapper[5114]: E0216 00:08:49.883329 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:49 crc kubenswrapper[5114]: E0216 00:08:49.884208 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.888374 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.888425 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:49 crc kubenswrapper[5114]: I0216 00:08:49.888467 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:49 crc kubenswrapper[5114]: E0216 00:08:49.888685 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.615142 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.638073 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.233:6443: connect: connection refused Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891172 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"58ea7cf355069731d736ded1f9a033e00b7f747f4a993b9d00516ab40c56d783"} Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891229 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"b10c64884bbd71e2157b1670c58209bda6bd063665c1ac3d058e91ad3a7fc7de"} Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891259 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"cc05bbf6d8b5e02515a1cbcd8639ce40b8118b0262ad8073c708dfa30ba9a54d"} Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891355 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891368 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891531 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891662 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891961 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891987 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.891997 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 
00:08:50.892205 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.892290 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.892314 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:50 crc kubenswrapper[5114]: E0216 00:08:50.892323 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.892883 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.892911 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.892921 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:50 crc kubenswrapper[5114]: E0216 00:08:50.893077 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:50 crc kubenswrapper[5114]: E0216 00:08:50.893191 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.940529 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:50 crc kubenswrapper[5114]: I0216 00:08:50.974108 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.357335 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.514438 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.801027 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.901119 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"33765468880ba21c7b0362a460e75d6e28decbeb2daa74e65202f1e4ac174738"}
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.901215 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"05b2d05490e4cfff0b22711d5a8c00f6728fa0e633a8b993400a629d4424fb55"}
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.901292 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.901388 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.901432 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.901308 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.902159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.902190 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.902202 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.902449 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.902496 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:51 crc kubenswrapper[5114]: E0216 00:08:51.902502 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.902515 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:51 crc kubenswrapper[5114]: E0216 00:08:51.903279 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.904291 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.904344 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:51 crc kubenswrapper[5114]: I0216 00:08:51.904436 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:51 crc kubenswrapper[5114]: E0216 00:08:51.904934 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.170597 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.172227 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.172350 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.172371 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.172411 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.321895 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.904582 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.904890 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.904927 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.905521 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.905585 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.905611 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.905949 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.906032 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.906054 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:52 crc kubenswrapper[5114]: E0216 00:08:52.906425 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:52 crc kubenswrapper[5114]: E0216 00:08:52.906886 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.906962 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.907011 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:52 crc kubenswrapper[5114]: I0216 00:08:52.907031 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:52 crc kubenswrapper[5114]: E0216 00:08:52.907628 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.615539 5114 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.615678 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.785948 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.908150 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.908206 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.909459 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.909550 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.909580 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.910067 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.910137 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:53 crc kubenswrapper[5114]: I0216 00:08:53.910165 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:53 crc kubenswrapper[5114]: E0216 00:08:53.910362 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:53 crc kubenswrapper[5114]: E0216 00:08:53.911037 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:08:55 crc kubenswrapper[5114]: E0216 00:08:55.864448 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 00:08:56 crc kubenswrapper[5114]: I0216 00:08:56.257155 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc"
Feb 16 00:08:56 crc kubenswrapper[5114]: I0216 00:08:56.257534 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:08:56 crc kubenswrapper[5114]: I0216 00:08:56.259052 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:08:56 crc kubenswrapper[5114]: I0216 00:08:56.259090 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:08:56 crc kubenswrapper[5114]: I0216 00:08:56.259098 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:08:56 crc kubenswrapper[5114]: E0216 00:08:56.260394 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:01 crc kubenswrapper[5114]: E0216 00:09:01.360414 5114 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Feb 16 00:09:01 crc kubenswrapper[5114]: I0216 00:09:01.514554 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body=
Feb 16 00:09:01 crc kubenswrapper[5114]: I0216 00:09:01.514693 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded"
Feb 16 00:09:01 crc kubenswrapper[5114]: I0216 00:09:01.639978 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Feb 16 00:09:01 crc kubenswrapper[5114]: I0216 00:09:01.741609 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 00:09:01 crc kubenswrapper[5114]: I0216 00:09:01.741721 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Feb 16 00:09:01 crc kubenswrapper[5114]: E0216 00:09:01.917676 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.616356 5114 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.616485 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.824662 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.824906 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.825870 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.825952 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.825979 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:03 crc kubenswrapper[5114]: E0216 00:09:03.826933 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.847805 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.915363 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.915732 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.916999 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.917101 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.917127 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:03 crc kubenswrapper[5114]: E0216 00:09:03.917817 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.936727 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.937801 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.937907 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:03 crc kubenswrapper[5114]: I0216 00:09:03.937942 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:03 crc kubenswrapper[5114]: E0216 00:09:03.938795 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:05 crc kubenswrapper[5114]: E0216 00:09:05.864643 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.519753 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.520020 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.520822 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.520867 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.520881 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.521187 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.524051 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.735284 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e708d9756 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.703681878 +0000 UTC m=+2.084958726,LastTimestamp:2026-02-16 00:08:45.703681878 +0000 UTC m=+2.084958726,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.736425 5114 trace.go:236] Trace[1708779385]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 00:08:54.678) (total time: 12057ms):
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[1708779385]: ---"Objects listed" error:services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope 12057ms (00:09:06.736)
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[1708779385]: [12.057325316s] [12.057325316s] END
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.736499 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.736603 5114 trace.go:236] Trace[417628546]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 00:08:55.138) (total time: 11598ms):
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[417628546]: ---"Objects listed" error:nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope 11598ms (00:09:06.736)
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[417628546]: [11.598531561s] [11.598531561s] END
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.736650 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.736768 5114 trace.go:236] Trace[1122496835]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 00:08:53.321) (total time: 13415ms):
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[1122496835]: ---"Objects listed" error:runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope 13415ms (00:09:06.736)
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[1122496835]: [13.415369771s] [13.415369771s] END
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.736804 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.736878 5114 trace.go:236] Trace[1713590748]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (16-Feb-2026 00:08:53.268) (total time: 13467ms):
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[1713590748]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 13467ms (00:09:06.736)
Feb 16 00:09:06 crc kubenswrapper[5114]: Trace[1713590748]: [13.467979472s] [13.467979472s] END
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.736897 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.741078 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.741068 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.744226 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.746483 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75ee2419 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,LastTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.752335 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e79e19aa0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.860182688 +0000 UTC m=+2.241459506,LastTimestamp:2026-02-16 00:08:45.860182688 +0000 UTC m=+2.241459506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.754724 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edabd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.917552498 +0000 UTC m=+2.298829316,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.762731 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edf6a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.917584288 +0000 UTC m=+2.298861106,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.770874 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41932->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.770897 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50446->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.770980 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41932->192.168.126.11:17697: read: connection reset by peer"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.771034 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50446->192.168.126.11:17697: read: connection reset by peer"
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.773549 5114 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.773615 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.773873 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75ee2419\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75ee2419 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,LastTimestamp:2026-02-16 00:08:45.917601308 +0000 UTC m=+2.298878126,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.782655 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edabd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.924625045 +0000 UTC m=+2.305901873,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.791659 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edf6a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.924645066 +0000 UTC m=+2.305921884,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.797037 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75ee2419\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75ee2419 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,LastTimestamp:2026-02-16 00:08:45.924657376 +0000 UTC m=+2.305934194,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.802590 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edabd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.924919859 +0000 UTC m=+2.306196677,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.811713 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edf6a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.924938239 +0000 UTC m=+2.306215057,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.817033 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75ee2419\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75ee2419 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,LastTimestamp:2026-02-16 00:08:45.924952619 +0000 UTC m=+2.306229437,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.821392 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edabd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.926718458 +0000 UTC m=+2.307995276,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.825681 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edf6a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.926744609 +0000 UTC m=+2.308021427,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.831597 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75ee2419\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75ee2419 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,LastTimestamp:2026-02-16 00:08:45.926757159 +0000 UTC m=+2.308033977,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.837057 5114 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.1894917e75edabd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.9268556 +0000 UTC m=+2.308132418,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.844448 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edf6a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.92688064 +0000 UTC m=+2.308157458,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.850317 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75ee2419\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group 
\"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75ee2419 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,LastTimestamp:2026-02-16 00:08:45.92689278 +0000 UTC m=+2.308169598,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.856615 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edabd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.928768801 +0000 UTC m=+2.310045619,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.862506 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edabd3\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edabd3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793864659 +0000 UTC m=+2.175141477,LastTimestamp:2026-02-16 00:08:45.928755231 +0000 UTC m=+2.310032049,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.868164 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edf6a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.928800391 +0000 UTC m=+2.310077209,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.872843 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75edf6a1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75edf6a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc 
status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793883809 +0000 UTC m=+2.175160627,LastTimestamp:2026-02-16 00:08:45.928812421 +0000 UTC m=+2.310089239,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.877568 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.1894917e75ee2419\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.1894917e75ee2419 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:45.793895449 +0000 UTC m=+2.175172267,LastTimestamp:2026-02-16 00:08:45.928823952 +0000 UTC m=+2.310100770,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.884005 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917e99c9425a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:46.395458138 +0000 UTC m=+2.776734956,LastTimestamp:2026-02-16 00:08:46.395458138 +0000 UTC m=+2.776734956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.888466 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917e99ca70f8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:46.395535608 +0000 UTC m=+2.776812416,LastTimestamp:2026-02-16 00:08:46.395535608 +0000 UTC m=+2.776812416,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.893223 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917e99d4060b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:46.396163595 +0000 UTC m=+2.777440413,LastTimestamp:2026-02-16 00:08:46.396163595 +0000 UTC m=+2.777440413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.898806 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917e99fe315c openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:46.398927196 +0000 UTC m=+2.780204014,LastTimestamp:2026-02-16 00:08:46.398927196 +0000 UTC m=+2.780204014,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 
00:09:06.902923 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894917e9a0af4f9 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:46.399763705 +0000 UTC m=+2.781040523,LastTimestamp:2026-02-16 00:08:46.399763705 +0000 UTC m=+2.781040523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.907625 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917ec978e012 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.195496466 +0000 UTC m=+3.576773274,LastTimestamp:2026-02-16 
00:08:47.195496466 +0000 UTC m=+3.576773274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.912384 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917ec97a5ae7 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.195593447 +0000 UTC m=+3.576870265,LastTimestamp:2026-02-16 00:08:47.195593447 +0000 UTC m=+3.576870265,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.919429 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917ec999ee96 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.19766287 +0000 UTC m=+3.578939688,LastTimestamp:2026-02-16 00:08:47.19766287 +0000 UTC m=+3.578939688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.925836 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917ec9bda4fe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.200003326 +0000 UTC m=+3.581280134,LastTimestamp:2026-02-16 00:08:47.200003326 +0000 UTC m=+3.581280134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.932886 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894917ec9c6b0ae openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.200596142 +0000 UTC m=+3.581872960,LastTimestamp:2026-02-16 00:08:47.200596142 +0000 UTC m=+3.581872960,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.937879 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917eca80d87a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.212796026 +0000 UTC m=+3.594072844,LastTimestamp:2026-02-16 00:08:47.212796026 +0000 UTC m=+3.594072844,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.942221 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917eca9b8de5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.214546405 +0000 UTC m=+3.595823223,LastTimestamp:2026-02-16 00:08:47.214546405 +0000 UTC m=+3.595823223,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.949394 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.950820 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917eca9e71bf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.214735807 +0000 UTC m=+3.596012625,LastTimestamp:2026-02-16 00:08:47.214735807 +0000 UTC m=+3.596012625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.953057 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="5484805ff3a94ca4034b8ad5ab4faaf70ca7648c1098bd73bfba6861c2a25bf4" exitCode=255 Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.953176 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"5484805ff3a94ca4034b8ad5ab4faaf70ca7648c1098bd73bfba6861c2a25bf4"} Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.953446 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.955062 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.955185 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.955279 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.955723 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:06 crc kubenswrapper[5114]: I0216 00:09:06.956102 5114 scope.go:117] "RemoveContainer" containerID="5484805ff3a94ca4034b8ad5ab4faaf70ca7648c1098bd73bfba6861c2a25bf4" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.955651 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917ecaa11b1f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.214910239 +0000 UTC m=+3.596187057,LastTimestamp:2026-02-16 00:08:47.214910239 +0000 UTC m=+3.596187057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.960923 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917ecab5c1be openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.216263614 +0000 UTC m=+3.597540432,LastTimestamp:2026-02-16 00:08:47.216263614 +0000 UTC m=+3.597540432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.964818 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894917ecad7bdeb openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.218490859 +0000 UTC m=+3.599767677,LastTimestamp:2026-02-16 00:08:47.218490859 +0000 UTC m=+3.599767677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.970082 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917edefc2c4a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.55642273 +0000 UTC m=+3.937699578,LastTimestamp:2026-02-16 00:08:47.55642273 +0000 UTC m=+3.937699578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.978430 5114 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917ee0076e26 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.573937702 +0000 UTC m=+3.955214530,LastTimestamp:2026-02-16 00:08:47.573937702 +0000 UTC m=+3.955214530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.985980 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917ee01f36bb openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.575496379 +0000 UTC 
m=+3.956773207,LastTimestamp:2026-02-16 00:08:47.575496379 +0000 UTC m=+3.956773207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:06 crc kubenswrapper[5114]: E0216 00:09:06.992731 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917eefd4f281 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.839064705 +0000 UTC m=+4.220341533,LastTimestamp:2026-02-16 00:08:47.839064705 +0000 UTC m=+4.220341533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.004008 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917eefe9f6c2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.84044205 +0000 UTC m=+4.221718868,LastTimestamp:2026-02-16 00:08:47.84044205 +0000 UTC m=+4.221718868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.010955 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917ef02012dd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.843988189 +0000 UTC m=+4.225265007,LastTimestamp:2026-02-16 00:08:47.843988189 +0000 UTC m=+4.225265007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.026120 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894917ef029a084 
openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:47.844614276 +0000 UTC m=+4.225891104,LastTimestamp:2026-02-16 00:08:47.844614276 +0000 UTC m=+4.225891104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.051305 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894917f026b6f6b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.150916971 +0000 UTC m=+4.532193789,LastTimestamp:2026-02-16 00:08:48.150916971 +0000 UTC m=+4.532193789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.065686 5114 event.go:359] "Server 
rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f02c6c29e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.156902046 +0000 UTC m=+4.538178864,LastTimestamp:2026-02-16 00:08:48.156902046 +0000 UTC m=+4.538178864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.072407 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f02e83bda openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.15909577 +0000 UTC m=+4.540372588,LastTimestamp:2026-02-16 00:08:48.15909577 +0000 UTC m=+4.540372588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: 
E0216 00:09:07.080310 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.1894917f0414f243 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.178803267 +0000 UTC m=+4.560080085,LastTimestamp:2026-02-16 00:08:48.178803267 +0000 UTC m=+4.560080085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.085377 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f04effa99 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.193157785 +0000 UTC m=+4.574434613,LastTimestamp:2026-02-16 00:08:48.193157785 +0000 UTC m=+4.574434613,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.092510 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f05010ad9 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.194276057 +0000 UTC m=+4.575552885,LastTimestamp:2026-02-16 00:08:48.194276057 +0000 UTC m=+4.575552885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.098133 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f0546b629 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container 
kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.198841897 +0000 UTC m=+4.580118715,LastTimestamp:2026-02-16 00:08:48.198841897 +0000 UTC m=+4.580118715,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.106901 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f0585afc8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.202969032 +0000 UTC m=+4.584245850,LastTimestamp:2026-02-16 00:08:48.202969032 +0000 UTC m=+4.584245850,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.118988 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f05b6d522 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.206189858 +0000 UTC m=+4.587466676,LastTimestamp:2026-02-16 00:08:48.206189858 +0000 UTC m=+4.587466676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.140138 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f08045e52 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.244825682 +0000 UTC m=+4.626102500,LastTimestamp:2026-02-16 00:08:48.244825682 +0000 UTC m=+4.626102500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.146482 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f144b735b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.450810715 +0000 UTC m=+4.832087533,LastTimestamp:2026-02-16 00:08:48.450810715 +0000 UTC m=+4.832087533,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.153077 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f15f39ae4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.4786081 +0000 UTC m=+4.859884908,LastTimestamp:2026-02-16 00:08:48.4786081 +0000 UTC m=+4.859884908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.164859 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f1608e2ec openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.480002796 +0000 UTC m=+4.861279614,LastTimestamp:2026-02-16 00:08:48.480002796 +0000 UTC m=+4.861279614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.172980 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f16620cea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.48584625 +0000 UTC m=+4.867123068,LastTimestamp:2026-02-16 00:08:48.48584625 +0000 UTC m=+4.867123068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 
00:09:07.178198 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917f1668c4e5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.486286565 +0000 UTC m=+4.867563393,LastTimestamp:2026-02-16 00:08:48.486286565 +0000 UTC m=+4.867563393,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.183443 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917f17cf3a3d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.509778493 +0000 UTC m=+4.891055301,LastTimestamp:2026-02-16 00:08:48.509778493 +0000 UTC 
m=+4.891055301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.187671 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f17e0323b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.510890555 +0000 UTC m=+4.892167373,LastTimestamp:2026-02-16 00:08:48.510890555 +0000 UTC m=+4.892167373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.195702 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917f17e3a7c9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.511117257 +0000 UTC m=+4.892394075,LastTimestamp:2026-02-16 00:08:48.511117257 +0000 UTC m=+4.892394075,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.205549 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f181984c4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.514647236 +0000 UTC m=+4.895924054,LastTimestamp:2026-02-16 00:08:48.514647236 +0000 UTC m=+4.895924054,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.216146 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f241b547b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.716092539 +0000 UTC m=+5.097369357,LastTimestamp:2026-02-16 00:08:48.716092539 +0000 UTC m=+5.097369357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.221615 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917f24fdef14 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.730943252 +0000 UTC m=+5.112220070,LastTimestamp:2026-02-16 00:08:48.730943252 +0000 UTC m=+5.112220070,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.227804 5114 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.1894917f257f851d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.739435805 +0000 UTC m=+5.120712623,LastTimestamp:2026-02-16 00:08:48.739435805 +0000 UTC m=+5.120712623,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.231907 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f25e91f65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.746356581 +0000 UTC m=+5.127633409,LastTimestamp:2026-02-16 00:08:48.746356581 +0000 UTC m=+5.127633409,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.237026 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894917f2630da0f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.751057423 +0000 UTC m=+5.132334241,LastTimestamp:2026-02-16 00:08:48.751057423 +0000 UTC m=+5.132334241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.241117 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f2792ffea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container 
kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.774266858 +0000 UTC m=+5.155543676,LastTimestamp:2026-02-16 00:08:48.774266858 +0000 UTC m=+5.155543676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.249885 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f27ada9ed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.776014317 +0000 UTC m=+5.157291135,LastTimestamp:2026-02-16 00:08:48.776014317 +0000 UTC m=+5.157291135,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.255483 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f2d13996a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.866580842 +0000 UTC m=+5.247857660,LastTimestamp:2026-02-16 00:08:48.866580842 +0000 UTC m=+5.247857660,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.263430 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f350128f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:48.999590133 +0000 UTC m=+5.380866971,LastTimestamp:2026-02-16 00:08:48.999590133 +0000 UTC m=+5.380866971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.268588 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" 
in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f36f828e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.032554725 +0000 UTC m=+5.413831553,LastTimestamp:2026-02-16 00:08:49.032554725 +0000 UTC m=+5.413831553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.274725 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f370a838a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.033757578 +0000 UTC m=+5.415034396,LastTimestamp:2026-02-16 00:08:49.033757578 +0000 UTC m=+5.415034396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 
16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.278070 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f3c9648af openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.126803631 +0000 UTC m=+5.508080459,LastTimestamp:2026-02-16 00:08:49.126803631 +0000 UTC m=+5.508080459,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.279952 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f3e3f999b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.154677147 +0000 UTC m=+5.535953975,LastTimestamp:2026-02-16 00:08:49.154677147 +0000 UTC m=+5.535953975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 
00:09:07.283909 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f47ea54dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.316861148 +0000 UTC m=+5.698137976,LastTimestamp:2026-02-16 00:08:49.316861148 +0000 UTC m=+5.698137976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.285404 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f48e27701 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.333122817 +0000 UTC m=+5.714399635,LastTimestamp:2026-02-16 00:08:49.333122817 +0000 UTC m=+5.714399635,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.293955 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f6a18d38a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.890333578 +0000 UTC m=+6.271610396,LastTimestamp:2026-02-16 00:08:49.890333578 +0000 UTC m=+6.271610396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.298503 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f79d586b0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.154358448 +0000 UTC m=+6.535635296,LastTimestamp:2026-02-16 00:08:50.154358448 +0000 UTC 
m=+6.535635296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.302546 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f7b8cab56 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.183138134 +0000 UTC m=+6.564414952,LastTimestamp:2026-02-16 00:08:50.183138134 +0000 UTC m=+6.564414952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.306211 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f7b9c2b96 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.184154006 +0000 UTC 
m=+6.565430814,LastTimestamp:2026-02-16 00:08:50.184154006 +0000 UTC m=+6.565430814,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.310182 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f8a7c4a07 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.433722887 +0000 UTC m=+6.814999695,LastTimestamp:2026-02-16 00:08:50.433722887 +0000 UTC m=+6.814999695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.313555 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f8bda7119 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.456670489 +0000 UTC m=+6.837947307,LastTimestamp:2026-02-16 00:08:50.456670489 +0000 UTC 
m=+6.837947307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.318598 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f8bf49ea8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.458386088 +0000 UTC m=+6.839662906,LastTimestamp:2026-02-16 00:08:50.458386088 +0000 UTC m=+6.839662906,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.322635 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917f9f3c5cd3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.781854931 +0000 UTC 
m=+7.163131789,LastTimestamp:2026-02-16 00:08:50.781854931 +0000 UTC m=+7.163131789,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.327064 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917fa0c0efd3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.807320531 +0000 UTC m=+7.188597359,LastTimestamp:2026-02-16 00:08:50.807320531 +0000 UTC m=+7.188597359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.336580 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917fa0d7c44e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:50.808816718 +0000 UTC m=+7.190093556,LastTimestamp:2026-02-16 00:08:50.808816718 +0000 UTC m=+7.190093556,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.341497 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917fb5a61ef5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:51.157884661 +0000 UTC m=+7.539161509,LastTimestamp:2026-02-16 00:08:51.157884661 +0000 UTC m=+7.539161509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.345451 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917fb7016b6f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 
00:08:51.180645231 +0000 UTC m=+7.561922089,LastTimestamp:2026-02-16 00:08:51.180645231 +0000 UTC m=+7.561922089,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.349728 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917fb727fb6b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:51.183172459 +0000 UTC m=+7.564449317,LastTimestamp:2026-02-16 00:08:51.183172459 +0000 UTC m=+7.564449317,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.353595 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917fc8ba28a7 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: 
etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:51.477964967 +0000 UTC m=+7.859241785,LastTimestamp:2026-02-16 00:08:51.477964967 +0000 UTC m=+7.859241785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.357695 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.1894917fca92059b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:51.508888987 +0000 UTC m=+7.890165815,LastTimestamp:2026-02-16 00:08:51.508888987 +0000 UTC m=+7.890165815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.363008 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 16 00:09:07 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-controller-manager-crc.1894918048246164 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 16 00:09:07 crc kubenswrapper[5114]: body: Feb 16 00:09:07 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:53.61563274 +0000 UTC m=+9.996909568,LastTimestamp:2026-02-16 00:08:53.61563274 +0000 UTC m=+9.996909568,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 16 00:09:07 crc kubenswrapper[5114]: > Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.373825 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894918048270d4e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:53.615807822 +0000 UTC m=+9.997084650,LastTimestamp:2026-02-16 00:08:53.615807822 +0000 UTC 
m=+9.997084650,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.378813 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 16 00:09:07 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.189491821ef5c982 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": context deadline exceeded Feb 16 00:09:07 crc kubenswrapper[5114]: body: Feb 16 00:09:07 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:01.514647938 +0000 UTC m=+17.895924756,LastTimestamp:2026-02-16 00:09:01.514647938 +0000 UTC m=+17.895924756,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 16 00:09:07 crc kubenswrapper[5114]: > Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.382352 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189491821ef712bd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:01.514732221 +0000 UTC m=+17.896009039,LastTimestamp:2026-02-16 00:09:01.514732221 +0000 UTC m=+17.896009039,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.386964 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 16 00:09:07 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.189491822c7df0a0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403
Feb 16 00:09:07 crc kubenswrapper[5114]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Feb 16 00:09:07 crc kubenswrapper[5114]:
Feb 16 00:09:07 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:01.741674656 +0000 UTC m=+18.122951494,LastTimestamp:2026-02-16 00:09:01.741674656 +0000 UTC m=+18.122951494,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 16 00:09:07 crc kubenswrapper[5114]: >
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.392280 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189491822c7f1fc2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:01.741752258 +0000 UTC m=+18.123029096,LastTimestamp:2026-02-16 00:09:01.741752258 +0000 UTC m=+18.123029096,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.395981 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1894918048246164\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 16 00:09:07 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-controller-manager-crc.1894918048246164 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 16 00:09:07 crc kubenswrapper[5114]: body:
Feb 16 00:09:07 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:53.61563274 +0000 UTC m=+9.996909568,LastTimestamp:2026-02-16 00:09:03.616440312 +0000 UTC m=+19.997717170,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 16 00:09:07 crc kubenswrapper[5114]: >
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.399844 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.1894918048270d4e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.1894918048270d4e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:53.615807822 +0000 UTC m=+9.997084650,LastTimestamp:2026-02-16 00:09:03.616518624 +0000 UTC m=+19.997795472,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.405012 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 16 00:09:07 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.1894918358425b3e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:41932->192.168.126.11:17697: read: connection reset by peer
Feb 16 00:09:07 crc kubenswrapper[5114]: body:
Feb 16 00:09:07 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:06.77093459 +0000 UTC m=+23.152211448,LastTimestamp:2026-02-16 00:09:06.77093459 +0000 UTC m=+23.152211448,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 16 00:09:07 crc kubenswrapper[5114]: >
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.408828 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 16 00:09:07 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.189491835842f351 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:50446->192.168.126.11:17697: read: connection reset by peer
Feb 16 00:09:07 crc kubenswrapper[5114]: body:
Feb 16 00:09:07 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:06.770973521 +0000 UTC m=+23.152250369,LastTimestamp:2026-02-16 00:09:06.770973521 +0000 UTC m=+23.152250369,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 16 00:09:07 crc kubenswrapper[5114]: >
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.413737 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894918358438c4a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:41932->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:06.771012682 +0000 UTC m=+23.152289540,LastTimestamp:2026-02-16 00:09:06.771012682 +0000 UTC m=+23.152289540,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.418638 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894918358448098 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:50446->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:06.771075224 +0000 UTC m=+23.152352082,LastTimestamp:2026-02-16 00:09:06.771075224 +0000 UTC m=+23.152352082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.422634 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=<
Feb 16 00:09:07 crc kubenswrapper[5114]: &Event{ObjectMeta:{kube-apiserver-crc.18949183586ae7e5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused
Feb 16 00:09:07 crc kubenswrapper[5114]: body:
Feb 16 00:09:07 crc kubenswrapper[5114]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:06.773592037 +0000 UTC m=+23.154868895,LastTimestamp:2026-02-16 00:09:06.773592037 +0000 UTC m=+23.154868895,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 16 00:09:07 crc kubenswrapper[5114]: >
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.426555 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183586bbce9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:06.773646569 +0000 UTC m=+23.154923427,LastTimestamp:2026-02-16 00:09:06.773646569 +0000 UTC m=+23.154923427,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.432536 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894917f370a838a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f370a838a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.033757578 +0000 UTC m=+5.415034396,LastTimestamp:2026-02-16 00:09:06.959617667 +0000 UTC m=+23.340894485,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.436962 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894917f47ea54dc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f47ea54dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.316861148 +0000 UTC m=+5.698137976,LastTimestamp:2026-02-16 00:09:07.328870077 +0000 UTC m=+23.710146895,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.441204 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894917f48e27701\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f48e27701 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.333122817 +0000 UTC m=+5.714399635,LastTimestamp:2026-02-16 00:09:07.339614759 +0000 UTC m=+23.720891587,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:07 crc kubenswrapper[5114]: I0216 00:09:07.644300 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:07 crc kubenswrapper[5114]: I0216 00:09:07.959894 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 16 00:09:07 crc kubenswrapper[5114]: I0216 00:09:07.962652 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21"}
Feb 16 00:09:07 crc kubenswrapper[5114]: I0216 00:09:07.962830 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:07 crc kubenswrapper[5114]: I0216 00:09:07.963540 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:07 crc kubenswrapper[5114]: I0216 00:09:07.963576 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:07 crc kubenswrapper[5114]: I0216 00:09:07.963588 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:07 crc kubenswrapper[5114]: E0216 00:09:07.963932 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:08 crc kubenswrapper[5114]: E0216 00:09:08.325455 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.643219 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.968053 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.968840 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.970987 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21" exitCode=255
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.971059 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21"}
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.971147 5114 scope.go:117] "RemoveContainer" containerID="5484805ff3a94ca4034b8ad5ab4faaf70ca7648c1098bd73bfba6861c2a25bf4"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.971204 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.971802 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.971842 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.971861 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:08 crc kubenswrapper[5114]: E0216 00:09:08.972220 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:08 crc kubenswrapper[5114]: I0216 00:09:08.972542 5114 scope.go:117] "RemoveContainer" containerID="695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21"
Feb 16 00:09:08 crc kubenswrapper[5114]: E0216 00:09:08.972845 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 16 00:09:08 crc kubenswrapper[5114]: E0216 00:09:08.978426 5114 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183db7f9dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,LastTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.620427 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.641514 5114 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.644778 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.976369 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.980144 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.981605 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.981786 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.981898 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:09 crc kubenswrapper[5114]: E0216 00:09:09.982590 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:09 crc kubenswrapper[5114]: I0216 00:09:09.983211 5114 scope.go:117] "RemoveContainer" containerID="695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21"
Feb 16 00:09:09 crc kubenswrapper[5114]: E0216 00:09:09.983704 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 16 00:09:09 crc kubenswrapper[5114]: E0216 00:09:09.992941 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18949183db7f9dba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183db7f9dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,LastTimestamp:2026-02-16 00:09:09.983648065 +0000 UTC m=+26.364924903,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.621282 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.621613 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.622799 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.622853 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.622868 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:10 crc kubenswrapper[5114]: E0216 00:09:10.623318 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.626835 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.642486 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.984270 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.987687 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.987793 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:10 crc kubenswrapper[5114]: I0216 00:09:10.987819 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:10 crc kubenswrapper[5114]: E0216 00:09:10.988509 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:11 crc kubenswrapper[5114]: I0216 00:09:11.646212 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:12 crc kubenswrapper[5114]: I0216 00:09:12.642674 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:13 crc kubenswrapper[5114]: I0216 00:09:13.142197 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:13 crc kubenswrapper[5114]: I0216 00:09:13.144429 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:13 crc kubenswrapper[5114]: I0216 00:09:13.144588 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:13 crc kubenswrapper[5114]: I0216 00:09:13.144678 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:13 crc kubenswrapper[5114]: I0216 00:09:13.144784 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Feb 16 00:09:13 crc kubenswrapper[5114]: E0216 00:09:13.159281 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 16 00:09:13 crc kubenswrapper[5114]: E0216 00:09:13.530309 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Feb 16 00:09:13 crc kubenswrapper[5114]: I0216 00:09:13.644436 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:14 crc kubenswrapper[5114]: I0216 00:09:14.421703 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:09:14 crc kubenswrapper[5114]: I0216 00:09:14.422057 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:14 crc kubenswrapper[5114]: I0216 00:09:14.423319 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:14 crc kubenswrapper[5114]: I0216 00:09:14.423410 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:14 crc kubenswrapper[5114]: I0216 00:09:14.423434 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:14 crc kubenswrapper[5114]: E0216 00:09:14.424241 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:14 crc kubenswrapper[5114]: I0216 00:09:14.424832 5114 scope.go:117] "RemoveContainer" containerID="695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21"
Feb 16 00:09:14 crc kubenswrapper[5114]: E0216 00:09:14.425239 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 16 00:09:14 crc kubenswrapper[5114]: E0216 00:09:14.433420 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18949183db7f9dba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183db7f9dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,LastTimestamp:2026-02-16 00:09:14.425179798 +0000 UTC m=+30.806456656,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:14 crc kubenswrapper[5114]: I0216 00:09:14.646319 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:15 crc kubenswrapper[5114]: E0216 00:09:15.327430 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 16 00:09:15 crc kubenswrapper[5114]: I0216 00:09:15.646631 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:15 crc kubenswrapper[5114]: E0216 00:09:15.865036 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 00:09:16 crc kubenswrapper[5114]: I0216 00:09:16.643988 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:17 crc kubenswrapper[5114]: I0216 00:09:17.645935 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:17 crc kubenswrapper[5114]: I0216 00:09:17.963829 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:09:17 crc kubenswrapper[5114]: I0216 00:09:17.964654 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:17 crc kubenswrapper[5114]: I0216 00:09:17.966225 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:17 crc kubenswrapper[5114]: I0216 00:09:17.966339 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:17 crc kubenswrapper[5114]: I0216 00:09:17.966370 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:17 crc kubenswrapper[5114]: E0216 00:09:17.967057 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Feb 16 00:09:17 crc kubenswrapper[5114]: I0216 00:09:17.967523 5114 scope.go:117] "RemoveContainer" containerID="695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21"
Feb 16 00:09:17 crc kubenswrapper[5114]: E0216 00:09:17.967843 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 16 00:09:17 crc kubenswrapper[5114]: E0216 00:09:17.975823 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18949183db7f9dba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183db7f9dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,LastTimestamp:2026-02-16 00:09:17.967796103 +0000 UTC m=+34.349072951,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 16 00:09:18 crc kubenswrapper[5114]: E0216 00:09:18.135934 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Feb 16 00:09:18 crc kubenswrapper[5114]: I0216 00:09:18.646743 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:18 crc kubenswrapper[5114]: E0216 00:09:18.733078 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Feb 16 00:09:19 crc kubenswrapper[5114]: E0216 00:09:19.157778 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Feb 16 00:09:19 crc kubenswrapper[5114]: I0216 00:09:19.645549 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 16 00:09:20 crc kubenswrapper[5114]: I0216 00:09:20.160456 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Feb 16 00:09:20 crc kubenswrapper[5114]: I0216 00:09:20.161992 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:09:20 crc kubenswrapper[5114]: I0216 00:09:20.162077 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:09:20 crc kubenswrapper[5114]: I0216 00:09:20.162103 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:09:20 crc
kubenswrapper[5114]: I0216 00:09:20.162139 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 16 00:09:20 crc kubenswrapper[5114]: E0216 00:09:20.178295 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 16 00:09:20 crc kubenswrapper[5114]: I0216 00:09:20.646478 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:21 crc kubenswrapper[5114]: I0216 00:09:21.642528 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:22 crc kubenswrapper[5114]: E0216 00:09:22.336472 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 00:09:22 crc kubenswrapper[5114]: I0216 00:09:22.645746 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:23 crc kubenswrapper[5114]: I0216 00:09:23.645394 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:24 crc 
kubenswrapper[5114]: I0216 00:09:24.645532 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:25 crc kubenswrapper[5114]: I0216 00:09:25.645785 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:25 crc kubenswrapper[5114]: E0216 00:09:25.866158 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 00:09:26 crc kubenswrapper[5114]: I0216 00:09:26.643730 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:27 crc kubenswrapper[5114]: I0216 00:09:27.178617 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:27 crc kubenswrapper[5114]: I0216 00:09:27.180761 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:27 crc kubenswrapper[5114]: I0216 00:09:27.180817 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:27 crc kubenswrapper[5114]: I0216 00:09:27.180837 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:27 crc kubenswrapper[5114]: I0216 00:09:27.180872 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 16 00:09:27 crc kubenswrapper[5114]: E0216 00:09:27.198861 5114 kubelet_node_status.go:116] 
"Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 16 00:09:27 crc kubenswrapper[5114]: I0216 00:09:27.645761 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:28 crc kubenswrapper[5114]: I0216 00:09:28.644720 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:29 crc kubenswrapper[5114]: E0216 00:09:29.344430 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 00:09:29 crc kubenswrapper[5114]: I0216 00:09:29.644607 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:29 crc kubenswrapper[5114]: I0216 00:09:29.815869 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:29 crc kubenswrapper[5114]: I0216 00:09:29.817002 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:29 crc kubenswrapper[5114]: I0216 00:09:29.817074 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:29 crc 
kubenswrapper[5114]: I0216 00:09:29.817100 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:29 crc kubenswrapper[5114]: E0216 00:09:29.817868 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:29 crc kubenswrapper[5114]: I0216 00:09:29.818383 5114 scope.go:117] "RemoveContainer" containerID="695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21" Feb 16 00:09:29 crc kubenswrapper[5114]: E0216 00:09:29.829062 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894917f370a838a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f370a838a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.033757578 +0000 UTC m=+5.415034396,LastTimestamp:2026-02-16 00:09:29.820493141 +0000 UTC m=+46.201769989,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:30 crc kubenswrapper[5114]: E0216 00:09:30.099811 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894917f47ea54dc\" is forbidden: User \"system:anonymous\" cannot patch resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f47ea54dc openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.316861148 +0000 UTC m=+5.698137976,LastTimestamp:2026-02-16 00:09:30.090839875 +0000 UTC m=+46.472116733,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:30 crc kubenswrapper[5114]: E0216 00:09:30.108943 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.1894917f48e27701\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.1894917f48e27701 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:08:49.333122817 +0000 UTC m=+5.714399635,LastTimestamp:2026-02-16 00:09:30.10136196 +0000 UTC m=+46.482638798,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:30 crc kubenswrapper[5114]: I0216 00:09:30.644012 
5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:31 crc kubenswrapper[5114]: I0216 00:09:31.048305 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 16 00:09:31 crc kubenswrapper[5114]: I0216 00:09:31.051035 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed"} Feb 16 00:09:31 crc kubenswrapper[5114]: I0216 00:09:31.051465 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:31 crc kubenswrapper[5114]: I0216 00:09:31.052412 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:31 crc kubenswrapper[5114]: I0216 00:09:31.052458 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:31 crc kubenswrapper[5114]: I0216 00:09:31.052469 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:31 crc kubenswrapper[5114]: E0216 00:09:31.052900 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:31 crc kubenswrapper[5114]: I0216 00:09:31.644114 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 
00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.056545 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.057438 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.060032 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed" exitCode=255 Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.060134 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed"} Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.060216 5114 scope.go:117] "RemoveContainer" containerID="695682502bb8903d4d7088cd82100a8471642a4b2a343da17a220a5cd2f16d21" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.060644 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.061560 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.061605 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.061629 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:32 crc kubenswrapper[5114]: E0216 00:09:32.062095 5114 kubelet.go:3336] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 00:09:32.062857 5114 scope.go:117] "RemoveContainer" containerID="d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed" Feb 16 00:09:32 crc kubenswrapper[5114]: E0216 00:09:32.063174 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:09:32 crc kubenswrapper[5114]: E0216 00:09:32.070961 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18949183db7f9dba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183db7f9dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,LastTimestamp:2026-02-16 00:09:32.063132486 +0000 UTC m=+48.444409334,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:32 crc kubenswrapper[5114]: I0216 
00:09:32.645379 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:33 crc kubenswrapper[5114]: E0216 00:09:33.032087 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Feb 16 00:09:33 crc kubenswrapper[5114]: I0216 00:09:33.067116 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 16 00:09:33 crc kubenswrapper[5114]: I0216 00:09:33.645862 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.199393 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.201061 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.201145 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.201165 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.201209 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 16 00:09:34 crc 
kubenswrapper[5114]: E0216 00:09:34.218547 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.421699 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.422369 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.423827 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.423933 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.423963 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:34 crc kubenswrapper[5114]: E0216 00:09:34.424733 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.425213 5114 scope.go:117] "RemoveContainer" containerID="d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed" Feb 16 00:09:34 crc kubenswrapper[5114]: E0216 00:09:34.425616 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:09:34 crc kubenswrapper[5114]: E0216 00:09:34.434725 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18949183db7f9dba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183db7f9dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,LastTimestamp:2026-02-16 00:09:34.425553482 +0000 UTC m=+50.806830340,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:34 crc kubenswrapper[5114]: I0216 00:09:34.644787 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:35 crc kubenswrapper[5114]: I0216 00:09:35.645784 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:35 crc kubenswrapper[5114]: E0216 00:09:35.867409 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get 
node info: node \"crc\" not found" Feb 16 00:09:36 crc kubenswrapper[5114]: E0216 00:09:36.353051 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 00:09:36 crc kubenswrapper[5114]: I0216 00:09:36.645584 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:36 crc kubenswrapper[5114]: E0216 00:09:36.661619 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Feb 16 00:09:37 crc kubenswrapper[5114]: I0216 00:09:37.644296 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:37 crc kubenswrapper[5114]: E0216 00:09:37.882386 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Feb 16 00:09:38 crc kubenswrapper[5114]: I0216 00:09:38.645396 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:39 crc kubenswrapper[5114]: I0216 00:09:39.643848 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:40 crc kubenswrapper[5114]: I0216 00:09:40.645819 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:40 crc kubenswrapper[5114]: I0216 00:09:40.902657 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 00:09:40 crc kubenswrapper[5114]: I0216 00:09:40.902939 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:40 crc kubenswrapper[5114]: I0216 00:09:40.903951 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:40 crc kubenswrapper[5114]: I0216 00:09:40.904007 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:40 crc kubenswrapper[5114]: I0216 00:09:40.904028 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:40 crc kubenswrapper[5114]: E0216 00:09:40.904581 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.052668 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 
00:09:41.053030 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.054235 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.054326 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.054350 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:41 crc kubenswrapper[5114]: E0216 00:09:41.054913 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.055352 5114 scope.go:117] "RemoveContainer" containerID="d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed" Feb 16 00:09:41 crc kubenswrapper[5114]: E0216 00:09:41.055651 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:09:41 crc kubenswrapper[5114]: E0216 00:09:41.064316 5114 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18949183db7f9dba\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18949183db7f9dba openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:09:08.972764602 +0000 UTC m=+25.354041440,LastTimestamp:2026-02-16 00:09:41.055603153 +0000 UTC m=+57.436879991,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.219243 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.220475 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.220545 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.220572 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.220617 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 16 00:09:41 crc kubenswrapper[5114]: E0216 00:09:41.234292 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 16 00:09:41 crc kubenswrapper[5114]: E0216 00:09:41.607114 5114 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: 
nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Feb 16 00:09:41 crc kubenswrapper[5114]: I0216 00:09:41.647171 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:42 crc kubenswrapper[5114]: I0216 00:09:42.645348 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:43 crc kubenswrapper[5114]: E0216 00:09:43.363423 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 00:09:43 crc kubenswrapper[5114]: I0216 00:09:43.646461 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:44 crc kubenswrapper[5114]: I0216 00:09:44.645881 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:45 crc kubenswrapper[5114]: I0216 00:09:45.644526 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get 
resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:45 crc kubenswrapper[5114]: E0216 00:09:45.868507 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 00:09:46 crc kubenswrapper[5114]: I0216 00:09:46.646427 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:47 crc kubenswrapper[5114]: I0216 00:09:47.646652 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:48 crc kubenswrapper[5114]: I0216 00:09:48.234640 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:48 crc kubenswrapper[5114]: I0216 00:09:48.236553 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:48 crc kubenswrapper[5114]: I0216 00:09:48.236619 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:48 crc kubenswrapper[5114]: I0216 00:09:48.236642 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:48 crc kubenswrapper[5114]: I0216 00:09:48.236679 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 16 00:09:48 crc kubenswrapper[5114]: E0216 00:09:48.251863 5114 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 
16 00:09:48 crc kubenswrapper[5114]: I0216 00:09:48.645898 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:49 crc kubenswrapper[5114]: I0216 00:09:49.643511 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:50 crc kubenswrapper[5114]: E0216 00:09:50.369742 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 16 00:09:50 crc kubenswrapper[5114]: I0216 00:09:50.642370 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:51 crc kubenswrapper[5114]: I0216 00:09:51.645203 5114 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 16 00:09:52 crc kubenswrapper[5114]: I0216 00:09:52.441912 5114 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-67rwl" Feb 16 00:09:52 crc kubenswrapper[5114]: I0216 00:09:52.451390 5114 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-67rwl" Feb 16 00:09:52 crc kubenswrapper[5114]: I0216 
00:09:52.500340 5114 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 16 00:09:53 crc kubenswrapper[5114]: I0216 00:09:53.163719 5114 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 16 00:09:53 crc kubenswrapper[5114]: I0216 00:09:53.452881 5114 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-03-18 00:04:52 +0000 UTC" deadline="2026-03-10 19:05:04.890470318 +0000 UTC" Feb 16 00:09:53 crc kubenswrapper[5114]: I0216 00:09:53.452959 5114 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="546h55m11.437517576s" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.252993 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.254291 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.254368 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.254390 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.254549 5114 kubelet_node_status.go:78] "Attempting to register node" node="crc" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.272584 5114 kubelet_node_status.go:127] "Node was previously registered" node="crc" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.272984 5114 kubelet_node_status.go:81] "Successfully registered node" node="crc" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.273027 5114 kubelet_node_status.go:597] "Error updating node status, will 
retry" err="error getting node \"crc\": node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.277937 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.278009 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.278030 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.278106 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.278132 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:09:55Z","lastTransitionTime":"2026-02-16T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.294093 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.301410 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.301494 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.301522 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.301554 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.301581 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:09:55Z","lastTransitionTime":"2026-02-16T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.311282 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.318491 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.318555 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.318583 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.318615 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.318640 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:09:55Z","lastTransitionTime":"2026-02-16T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.329409 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.338958 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.339011 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.339030 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.339052 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.339069 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:09:55Z","lastTransitionTime":"2026-02-16T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.355941 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.356150 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.356189 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.456741 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.557405 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.657910 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.758034 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.816182 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.817287 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.817348 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.817361 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.818102 
5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:55 crc kubenswrapper[5114]: I0216 00:09:55.818440 5114 scope.go:117] "RemoveContainer" containerID="d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.858301 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.869698 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 00:09:55 crc kubenswrapper[5114]: E0216 00:09:55.959265 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.059845 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: I0216 00:09:56.158732 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.159962 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: I0216 00:09:56.160840 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6"} Feb 16 00:09:56 crc kubenswrapper[5114]: I0216 00:09:56.161148 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:56 crc kubenswrapper[5114]: I0216 00:09:56.161976 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:56 crc kubenswrapper[5114]: I0216 00:09:56.162046 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:56 crc kubenswrapper[5114]: I0216 00:09:56.162069 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.162852 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.260921 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.361858 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.462803 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.563329 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.663661 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.764377 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.865485 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:56 crc kubenswrapper[5114]: E0216 00:09:56.966464 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.067336 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.166277 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.166973 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.167448 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.169050 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6" exitCode=255 Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.169120 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6"} Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.169163 5114 scope.go:117] "RemoveContainer" containerID="d62fed25b3cdb3fec5e4aaccc25c4b468414fc562060a6b93a3ce6c8cc0764ed" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.169435 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.170270 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.170327 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.170341 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.170860 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:09:57 crc kubenswrapper[5114]: I0216 00:09:57.171147 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.171422 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.268184 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.369068 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.469830 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.570941 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.671985 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc 
kubenswrapper[5114]: E0216 00:09:57.772606 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.873167 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:57 crc kubenswrapper[5114]: E0216 00:09:57.974350 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.075337 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: I0216 00:09:58.174030 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.175491 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.276114 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.376809 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.477098 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.577482 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.678388 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.779538 5114 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.879778 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:58 crc kubenswrapper[5114]: E0216 00:09:58.980672 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.081505 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.182632 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.282965 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.383179 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.484309 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.584503 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.684914 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.785322 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc kubenswrapper[5114]: E0216 00:09:59.885786 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:09:59 crc 
kubenswrapper[5114]: E0216 00:09:59.986905 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.087955 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.189011 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.289175 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.390091 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.490549 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.591738 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.691879 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.792322 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.892415 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:00 crc kubenswrapper[5114]: E0216 00:10:00.992878 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.093303 5114 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.194240 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.295356 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.396314 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.497375 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.597975 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.699020 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.799495 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:01 crc kubenswrapper[5114]: E0216 00:10:01.899996 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.000999 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.101562 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.202429 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.302988 5114 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.404142 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.504532 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.605096 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.705356 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.805573 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:02 crc kubenswrapper[5114]: E0216 00:10:02.906050 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.006825 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.107227 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.208335 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.309122 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.409748 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 
00:10:03.510978 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.612195 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.712998 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.813131 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:03 crc kubenswrapper[5114]: E0216 00:10:03.913297 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.014284 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.114656 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.215367 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.316045 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.417210 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: I0216 00:10:04.421518 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:04 crc kubenswrapper[5114]: I0216 00:10:04.421985 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Feb 16 00:10:04 crc kubenswrapper[5114]: I0216 00:10:04.423428 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:04 crc kubenswrapper[5114]: I0216 00:10:04.423483 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:04 crc kubenswrapper[5114]: I0216 00:10:04.423499 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.424068 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:10:04 crc kubenswrapper[5114]: I0216 00:10:04.424407 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.424667 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.518084 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.618499 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.719115 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.819528 5114 kubelet_node_status.go:515] "Error 
getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:04 crc kubenswrapper[5114]: E0216 00:10:04.919905 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.020342 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.121358 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.221794 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.322404 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.413520 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.419322 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.419379 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.419399 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.419426 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.419444 5114 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:05Z","lastTransitionTime":"2026-02-16T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.436077 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.448657 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.448794 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.448818 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.448846 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.448867 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:05Z","lastTransitionTime":"2026-02-16T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.465714 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.477397 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.477461 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.477481 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.477505 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.477565 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:05Z","lastTransitionTime":"2026-02-16T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.493869 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.506392 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.506491 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.506521 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.506561 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.506590 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:05Z","lastTransitionTime":"2026-02-16T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.521863 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.522145 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.522178 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.622564 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.723401 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.816596 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.817740 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.817798 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:05 crc kubenswrapper[5114]: I0216 00:10:05.817824 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.818470 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.823926 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:05 crc 
kubenswrapper[5114]: E0216 00:10:05.870535 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 16 00:10:05 crc kubenswrapper[5114]: E0216 00:10:05.924239 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.024991 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.125595 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.162010 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.162467 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.163889 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.163961 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.163987 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.165141 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.165770 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 
00:10:06.166427 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.226539 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.326638 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.427590 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.527825 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.628633 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.729748 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.816554 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.818232 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 00:10:06.818497 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:06 crc kubenswrapper[5114]: I0216 
00:10:06.818636 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.819616 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.830608 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:06 crc kubenswrapper[5114]: E0216 00:10:06.931500 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.032554 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.133767 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.234290 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.335340 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.436075 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.537281 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.637460 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.738479 5114 kubelet_node_status.go:515] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.839696 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:07 crc kubenswrapper[5114]: E0216 00:10:07.939924 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.040498 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.141172 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.241642 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.343142 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.443577 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.543903 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.644311 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.744607 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.844783 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:08 crc kubenswrapper[5114]: E0216 00:10:08.945586 5114 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.046564 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.147678 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.248877 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.349929 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.450099 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.550673 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.651317 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.752007 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.852897 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:09 crc kubenswrapper[5114]: E0216 00:10:09.953962 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.054534 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc 
kubenswrapper[5114]: E0216 00:10:10.154676 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.254877 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.355608 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.455717 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.555893 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.656458 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.757153 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.857549 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:10 crc kubenswrapper[5114]: E0216 00:10:10.958043 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.059016 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.159530 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.259919 5114 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.360914 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.462068 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.562806 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.663838 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.764415 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.865452 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:11 crc kubenswrapper[5114]: E0216 00:10:11.965970 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.066818 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.167412 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.268583 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.368782 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.469951 5114 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.571047 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.671948 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.772435 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: I0216 00:10:12.843882 5114 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.873075 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:12 crc kubenswrapper[5114]: E0216 00:10:12.973903 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.074926 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.175981 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.276561 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.377095 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.477761 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: 
E0216 00:10:13.578165 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.678948 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.780098 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.880209 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:13 crc kubenswrapper[5114]: E0216 00:10:13.981123 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.082294 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.182639 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.283786 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.384374 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.485032 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.585223 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.686571 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.787836 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.888609 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:14 crc kubenswrapper[5114]: E0216 00:10:14.988955 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.089728 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.190697 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.291314 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.391814 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.492816 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.593506 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.694191 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.794660 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.821847 5114 kubelet_node_status.go:597] "Error updating node status, will 
retry" err="error getting node \"crc\": node \"crc\" not found" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.828453 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.828551 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.828578 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.828615 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.828636 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:15Z","lastTransitionTime":"2026-02-16T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.845953 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.851171 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.851239 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.851275 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.851302 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.851321 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:15Z","lastTransitionTime":"2026-02-16T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.867746 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.870989 5114 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.874083 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.874163 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.874183 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.874211 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.874292 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:15Z","lastTransitionTime":"2026-02-16T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.891702 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.896318 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.896399 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.896421 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.896450 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:15 crc kubenswrapper[5114]: I0216 00:10:15.896472 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:15Z","lastTransitionTime":"2026-02-16T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.912808 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.913170 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:10:15 crc kubenswrapper[5114]: E0216 00:10:15.913223 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.013397 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.114556 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.214906 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.316042 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.417132 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.517645 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.618669 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.719086 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.819584 5114 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:16 crc kubenswrapper[5114]: E0216 00:10:16.920699 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.021842 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.123053 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.223482 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.324664 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.425369 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.526158 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.626671 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.727906 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.828066 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:17 crc kubenswrapper[5114]: E0216 00:10:17.928960 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc 
kubenswrapper[5114]: E0216 00:10:18.030131 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.130846 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.231474 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.332078 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.432561 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.533057 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.633539 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.733975 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: I0216 00:10:18.815890 5114 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Feb 16 00:10:18 crc kubenswrapper[5114]: I0216 00:10:18.816997 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:18 crc kubenswrapper[5114]: I0216 00:10:18.817052 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:18 crc kubenswrapper[5114]: I0216 00:10:18.817071 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.817718 5114 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Feb 16 00:10:18 crc kubenswrapper[5114]: I0216 00:10:18.818099 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.818419 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.834757 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:18 crc kubenswrapper[5114]: E0216 00:10:18.935976 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.036954 5114 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.042937 5114 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.112212 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.137685 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.140440 5114 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.140534 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.140554 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.140579 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.140599 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.233864 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.243123 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.243180 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.243195 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.243215 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.243232 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.335498 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.346090 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.346130 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.346141 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.346161 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.346179 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.434538 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.449007 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.449065 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.449082 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.449123 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.449141 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.551345 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.551408 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.551422 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.551445 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.551459 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.651413 5114 apiserver.go:52] "Watching apiserver" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.653787 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.653880 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.653901 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.653931 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.653952 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.660908 5114 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.661792 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-additional-cni-plugins-wlt2s","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf","openshift-image-registry/node-ca-72dpq","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-network-node-identity/network-node-identity-dgvkt","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/machine-config-daemon-vp5kn","openshift-multus/network-metrics-daemon-vk5fl","openshift-network-diagnostics/network-check-target-fhkjl","openshift-ovn-kubernetes/ovnkube-node-9clwb","openshift-dns/node-resolver-zp67w","openshift-multus/multus-5jlj6","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc"] Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.664193 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.669883 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.670017 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.670087 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.671462 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.671668 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.672023 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.672157 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.674659 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.676000 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.676872 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.679194 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.679668 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.680102 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.680293 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.680597 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.685771 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.685906 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.686198 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.688824 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.688916 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.689699 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.692734 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.695530 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.696220 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.696875 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.697489 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.697657 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.697671 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.697669 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.698374 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.699334 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.702179 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.702582 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.702586 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.703079 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.703600 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.705672 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.705798 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.712048 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.712661 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.715150 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.715663 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.715818 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.715935 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.716040 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.716296 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.716509 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.717407 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.718099 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.719079 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.721695 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.721846 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.721929 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.723777 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.731848 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.734724 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.754238 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.755700 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.755850 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.762680 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.762826 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.762891 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.762961 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.763024 5114 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.766700 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.785721 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786452 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786510 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-cni-multus\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786541 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786570 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786596 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786618 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786642 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 
00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786667 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4627438-b1a6-4cc9-85f6-10e9dd97943b-cni-binary-copy\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786691 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq4ff\" (UniqueName: \"kubernetes.io/projected/c4627438-b1a6-4cc9-85f6-10e9dd97943b-kube-api-access-pq4ff\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786717 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786749 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786773 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-slash\") pod \"ovnkube-node-9clwb\" (UID: 
\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786795 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skmcq\" (UniqueName: \"kubernetes.io/projected/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-kube-api-access-skmcq\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786816 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-system-cni-dir\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786839 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-host\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786862 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786882 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-netns\") pod 
\"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786905 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-hostroot\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786923 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-conf-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786944 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-cnibin\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786964 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovn-node-metrics-cert\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.786983 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: 
\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787004 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-cnibin\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787030 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787052 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-etc-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787073 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-ovn-kubernetes\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787093 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: 
\"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-socket-dir-parent\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787113 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787137 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42vvz\" (UniqueName: \"kubernetes.io/projected/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-kube-api-access-42vvz\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787160 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-os-release\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787194 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787218 
5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-netns\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787240 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-env-overrides\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787360 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-system-cni-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787468 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-hosts-file\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787535 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-cni-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787612 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-serviceca\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787696 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787736 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-systemd-units\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787780 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-systemd\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787808 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-cni-binary-copy\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787831 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-bin\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787853 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-netd\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787873 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-config\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787894 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-ovn\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787914 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-node-log\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787937 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-script-lib\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787957 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-kubelet\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.787989 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788012 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-kubelet\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788034 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-log-socket\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: 
I0216 00:10:19.788055 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-os-release\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788075 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-daemon-config\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788103 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788125 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-etc-kubernetes\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788147 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2glh5\" (UniqueName: \"kubernetes.io/projected/e654f43c-5ba1-48a5-87ae-f6672304d245-kube-api-access-2glh5\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " 
pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788171 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-proxy-tls\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788194 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788218 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxrth\" (UniqueName: \"kubernetes.io/projected/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-kube-api-access-qxrth\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788239 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-tmp-dir\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788290 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-cni-bin\") pod \"multus-5jlj6\" (UID: 
\"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788342 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-multus-certs\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.788496 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788560 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrjj\" (UniqueName: \"kubernetes.io/projected/d6149fdd-e85e-41f7-b50a-76f70c153c44-kube-api-access-thrjj\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788584 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctpq7\" (UniqueName: \"kubernetes.io/projected/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-kube-api-access-ctpq7\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788609 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc 
kubenswrapper[5114]: I0216 00:10:19.788637 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788679 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788701 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-k8s-cni-cncf-io\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788722 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-mcd-auth-proxy-config\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788756 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-var-lib-openvswitch\") pod 
\"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.788777 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-rootfs\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.790071 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.790294 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:20.290234632 +0000 UTC m=+96.671511450 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.790398 5114 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.790218 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.790716 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:20.290701756 +0000 UTC m=+96.671978594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.797642 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.804419 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " 
pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.804975 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.812656 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.812870 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.812885 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.812896 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.812955 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:20.31293823 +0000 UTC m=+96.694215048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.813876 5114 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.817347 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.819889 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.821765 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: 
\"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.822322 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.826451 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.826480 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.826493 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.826545 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:20.326529064 +0000 UTC m=+96.707805882 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.836222 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.849081 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.859093 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-zp67w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-skmcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zp67w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.865647 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.865752 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.865772 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.865802 5114 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.865829 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.874933 5114 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.885387 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9clwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889630 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889695 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889724 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889774 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 16 00:10:19 crc 
kubenswrapper[5114]: I0216 00:10:19.889803 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889847 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889875 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889898 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889952 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.889999 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890025 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890048 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890095 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890124 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890171 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890197 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890239 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890284 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890328 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890357 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890400 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890425 5114 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890451 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890494 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890525 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890573 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890602 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: 
\"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890652 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890686 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890731 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890762 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890807 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890837 5114 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890880 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890906 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890931 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.890975 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891001 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: 
\"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891050 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891079 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891320 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891359 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891403 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891429 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891503 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891568 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891598 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891645 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891674 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891722 5114 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891750 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891801 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891840 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891894 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891923 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod 
\"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891967 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.891997 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.892043 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.892074 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.892127 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.892170 5114 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.892958 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.892850 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.893623 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.893624 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.892233 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.894338 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.894383 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.894607 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.894791 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.894928 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895064 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895173 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895415 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895624 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895662 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895770 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895784 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895810 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.896160 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.896157 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.896666 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.896588 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.895217 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.896824 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897352 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). 
InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897176 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897608 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897625 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897688 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897756 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897799 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897831 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897863 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897894 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897932 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897959 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.897991 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898019 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898053 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898081 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898113 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898140 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898168 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898214 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898238 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898283 5114 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898304 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898311 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898335 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898366 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898396 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898424 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898450 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898482 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898505 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898529 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898553 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898580 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898609 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 
16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898637 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898664 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898690 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898713 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898738 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898763 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: 
\"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898789 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898814 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898807 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898842 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898817 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898873 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898912 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898942 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 
00:10:19.898968 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898988 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898996 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.898997 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899025 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899035 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899052 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899064 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899048 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899151 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899196 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899234 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899372 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899426 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899705 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899770 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899809 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899847 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899859 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899891 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899927 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.899965 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900004 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900059 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900072 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900098 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900136 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900173 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900214 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900279 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: 
\"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900290 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900325 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900349 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900349 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900395 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900369 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900487 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900526 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900794 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900850 5114 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901298 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901828 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901874 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901943 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901978 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod 
\"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902004 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902026 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902047 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902067 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902085 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902102 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" 
(UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902122 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902145 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902164 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902207 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902230 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 
00:10:19.902288 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902309 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902328 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902347 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902366 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902386 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") 
pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902404 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902427 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902448 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902466 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902489 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902509 5114 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902544 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902565 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902597 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902620 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902644 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod 
\"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902664 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902685 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902708 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902727 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902746 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902768 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902789 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902809 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902830 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902851 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902871 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc 
kubenswrapper[5114]: I0216 00:10:19.902896 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902918 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902940 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902960 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902980 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903027 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903068 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903088 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903112 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903136 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903159 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 16 00:10:19 crc kubenswrapper[5114]: 
I0216 00:10:19.903182 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903212 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903238 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903281 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903308 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903339 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: 
\"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903366 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903392 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903421 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903453 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903482 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc 
kubenswrapper[5114]: I0216 00:10:19.903509 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903536 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903563 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903592 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903620 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903647 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903677 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903704 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903728 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903756 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903881 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903921 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903954 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903984 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904014 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904050 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904085 5114 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904138 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904185 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904516 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-serviceca\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904572 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-systemd-units\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904606 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-systemd\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904642 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-cni-binary-copy\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904671 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-bin\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904701 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-netd\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904731 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-config\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904794 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-ovn\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904832 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-node-log\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904870 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-script-lib\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905048 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-kubelet\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905183 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-kubelet\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905467 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-log-socket\") pod \"ovnkube-node-9clwb\" 
(UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905496 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-os-release\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905522 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-daemon-config\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905567 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-etc-kubernetes\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905601 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2glh5\" (UniqueName: \"kubernetes.io/projected/e654f43c-5ba1-48a5-87ae-f6672304d245-kube-api-access-2glh5\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905650 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-proxy-tls\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " 
pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905698 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905740 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phgcx\" (UniqueName: \"kubernetes.io/projected/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-kube-api-access-phgcx\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905785 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905823 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qxrth\" (UniqueName: \"kubernetes.io/projected/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-kube-api-access-qxrth\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905849 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-tmp-dir\") 
pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905881 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-cni-bin\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905913 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-multus-certs\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905942 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900809 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905981 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thrjj\" (UniqueName: \"kubernetes.io/projected/d6149fdd-e85e-41f7-b50a-76f70c153c44-kube-api-access-thrjj\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900835 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901147 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901287 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901429 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.900727 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901550 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.901794 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902029 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902377 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.902506 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903925 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.903907 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906132 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904425 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904704 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904786 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904806 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.904814 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905108 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905421 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905571 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906385 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905655 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.905986 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906602 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906622 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906813 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906916 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906969 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907100 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctpq7\" (UniqueName: \"kubernetes.io/projected/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-kube-api-access-ctpq7\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907170 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907217 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-k8s-cni-cncf-io\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907280 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-mcd-auth-proxy-config\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907317 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-var-lib-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907354 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-rootfs\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907394 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907431 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" 
(UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-cni-multus\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907464 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907517 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908033 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4627438-b1a6-4cc9-85f6-10e9dd97943b-cni-binary-copy\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908076 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pq4ff\" (UniqueName: \"kubernetes.io/projected/c4627438-b1a6-4cc9-85f6-10e9dd97943b-kube-api-access-pq4ff\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908106 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: 
\"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908171 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-slash\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908203 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-skmcq\" (UniqueName: \"kubernetes.io/projected/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-kube-api-access-skmcq\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908235 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-system-cni-dir\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908295 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-host\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908329 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-netns\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908356 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-hostroot\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908387 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-conf-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908417 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-cnibin\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908452 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovn-node-metrics-cert\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908483 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: 
\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908514 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-cnibin\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908550 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-etc-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908617 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-ovn-kubernetes\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908658 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-socket-dir-parent\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908694 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " 
pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908725 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-42vvz\" (UniqueName: \"kubernetes.io/projected/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-kube-api-access-42vvz\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908761 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.906792 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36e77927-3498-4ebe-bcc5-62b9ddc1ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52f25b1258
c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:09:56Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0216 00:09:56.366393 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 00:09:56.366553 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0216 00:09:56.367494 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1793456589/tls.crt::/tmp/serving-cert-1793456589/tls.key\\\\\\\"\\\\nI0216 00:09:56.738309 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 00:09:56.741479 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 00:09:56.741559 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 00:09:56.741646 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 00:09:56.741696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 00:09:56.750507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 00:09:56.750535 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 00:09:56.750564 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750574 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 00:09:56.750585 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 00:09:56.750589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 00:09:56.750593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 00:09:56.751514 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:09:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908798 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-os-release\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908852 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-netns\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc 
kubenswrapper[5114]: I0216 00:10:19.908886 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-env-overrides\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908919 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-system-cni-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908980 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-hosts-file\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909014 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-cni-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909163 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909191 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node 
\"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909209 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909226 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909265 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909284 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909302 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909323 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909338 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 
00:10:19.909353 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909370 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909386 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909402 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909419 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909436 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909451 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909468 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909482 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909496 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909513 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909531 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909545 5114 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909561 5114 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909573 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 
crc kubenswrapper[5114]: I0216 00:10:19.909586 5114 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909603 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909621 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909638 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909653 5114 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909668 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909683 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909699 5114 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909714 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909728 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909744 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909759 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909773 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909787 5114 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909801 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 
crc kubenswrapper[5114]: I0216 00:10:19.909817 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909828 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909838 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909849 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909860 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909870 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909884 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909895 5114 reconciler_common.go:299] "Volume detached 
for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909907 5114 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909918 5114 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909929 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909941 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909952 5114 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909962 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909973 5114 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 16 
00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909983 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909994 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910005 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910016 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910028 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910040 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910053 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910065 5114 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910078 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910090 5114 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910102 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910115 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910330 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-cni-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910826 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910860 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-k8s-cni-cncf-io\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907233 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907707 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907797 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.907934 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908045 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908206 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908604 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908599 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908632 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.908642 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.909687 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910162 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910436 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910596 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.910695 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.911964 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-hostroot\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.912012 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-var-lib-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.911305 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.911343 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.912059 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-cnibin\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.911925 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-conf-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.912504 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.912568 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.912917 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.912992 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.913328 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.913493 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-mcd-auth-proxy-config\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.913580 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.913690 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.914315 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.914316 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.914574 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.914652 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.914829 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.914977 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915081 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915087 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915376 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915463 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-cnibin\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915505 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-etc-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915642 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-cni-multus\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915726 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915759 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915844 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.915860 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-socket-dir-parent\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916069 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-kubelet\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916270 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). 
InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916388 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916551 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916660 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-ovn-kubernetes\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916808 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-slash\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916816 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-log-socket\") pod 
\"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916894 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-os-release\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916944 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-etc-kubernetes\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.916996 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-host\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.917029 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-rootfs\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.917055 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-openvswitch\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 
00:10:19.917073 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-kubelet\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.917166 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.917193 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.917679 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-os-release\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.917954 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-netns\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.918038 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.918528 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-systemd-units\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.917749 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-netns\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.919869 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c4627438-b1a6-4cc9-85f6-10e9dd97943b-cni-binary-copy\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.920224 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.920515 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.920647 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/c4627438-b1a6-4cc9-85f6-10e9dd97943b-multus-daemon-config\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.921101 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.921182 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs podName:d6149fdd-e85e-41f7-b50a-76f70c153c44 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:20.421158616 +0000 UTC m=+96.802435444 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs") pod "network-metrics-daemon-vk5fl" (UID: "d6149fdd-e85e-41f7-b50a-76f70c153c44") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.921634 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovn-node-metrics-cert\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: E0216 00:10:19.922533 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:10:20.422513185 +0000 UTC m=+96.803790013 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.923625 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-env-overrides\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.924472 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.925174 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.926197 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-serviceca\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.926498 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-proxy-tls\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.927363 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.928233 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.928535 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.928738 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-system-cni-dir\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.928833 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-hosts-file\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.928876 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-systemd\") pod \"ovnkube-node-9clwb\" (UID: 
\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.929020 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.929097 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-var-lib-cni-bin\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.929134 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-bin\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.929462 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-netd\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.929890 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/c4627438-b1a6-4cc9-85f6-10e9dd97943b-host-run-multus-certs\") pod \"multus-5jlj6\" (UID: 
\"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.930404 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-tmp-dir\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.930623 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e654f43c-5ba1-48a5-87ae-f6672304d245-cni-binary-copy\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.930661 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.930877 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-config\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.930925 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-ovn\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.930949 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-node-log\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.931400 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.931537 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctpq7\" (UniqueName: \"kubernetes.io/projected/a17caad8-b1e3-46bb-a3fe-843bba1b8f97-kube-api-access-ctpq7\") pod \"node-ca-72dpq\" (UID: \"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\") " pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.931657 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.932200 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.933169 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.933296 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-script-lib\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.933598 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.933781 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.933856 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e654f43c-5ba1-48a5-87ae-f6672304d245-system-cni-dir\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.934056 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.934359 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.934366 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.935306 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.936409 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.936542 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.937129 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.936922 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.937628 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.938116 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.938154 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.937981 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.938322 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.938650 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.939460 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.940426 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.940453 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.940894 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.941439 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.941800 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.941878 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.941977 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-skmcq\" (UniqueName: \"kubernetes.io/projected/cbb290fa-349e-4aa8-b21a-00ef48fba6e7-kube-api-access-skmcq\") pod \"node-resolver-zp67w\" (UID: \"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\") " pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.942335 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.942588 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.942612 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-42vvz\" (UniqueName: \"kubernetes.io/projected/b6929dc4-3c97-49e3-b4c6-cc35d5e7b917-kube-api-access-42vvz\") pod \"machine-config-daemon-vp5kn\" (UID: \"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\") " pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.942784 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943037 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vp5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943096 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943114 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943122 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943612 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943680 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943982 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.944235 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.944351 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.944383 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.944464 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.944664 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.944721 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.944933 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.943840 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.945136 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.945320 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.945427 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.945691 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.945743 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.945839 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946129 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946620 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946633 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946359 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946507 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946800 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2glh5\" (UniqueName: \"kubernetes.io/projected/e654f43c-5ba1-48a5-87ae-f6672304d245-kube-api-access-2glh5\") pod \"multus-additional-cni-plugins-wlt2s\" (UID: \"e654f43c-5ba1-48a5-87ae-f6672304d245\") " pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946905 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.946914 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.947287 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.947418 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.947442 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.947582 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.947803 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.947847 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948028 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948175 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948262 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948334 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948391 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948709 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948800 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948766 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.948901 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.949093 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.949194 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.949403 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.949516 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.949736 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq4ff\" (UniqueName: \"kubernetes.io/projected/c4627438-b1a6-4cc9-85f6-10e9dd97943b-kube-api-access-pq4ff\") pod \"multus-5jlj6\" (UID: \"c4627438-b1a6-4cc9-85f6-10e9dd97943b\") " pod="openshift-multus/multus-5jlj6"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.950104 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.950188 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.950287 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.950409 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.950704 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.950889 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls".
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.951077 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrjj\" (UniqueName: \"kubernetes.io/projected/d6149fdd-e85e-41f7-b50a-76f70c153c44-kube-api-access-thrjj\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.951852 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952022 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952274 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952307 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952476 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952712 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952727 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952837 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.952862 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.953060 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.953079 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz".
PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.953117 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.953170 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.953555 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.953809 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.953955 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.954055 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.954236 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.954415 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.954446 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vk5fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6149fdd-e85e-41f7-b50a-76f70c153c44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vk5fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.954668 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.955188 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.957046 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxrth\" (UniqueName: \"kubernetes.io/projected/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-kube-api-access-qxrth\") pod \"ovnkube-node-9clwb\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " pod="openshift-ovn-kubernetes/ovnkube-node-9clwb"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.958492 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.959036 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca".
PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.961956 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.966349 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.967942 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-5jlj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4627438-b1a6-4cc9-85f6-10e9dd97943b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jlj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.969931 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.969995 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.970012 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.970029 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.970046 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:19Z","lastTransitionTime":"2026-02-16T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.973495 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.975585 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-zp67w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-skmcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zp67w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.979310 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.986115 5114 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv"
Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.992730 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9clwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:19 crc kubenswrapper[5114]: I0216 00:10:19.999850 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.000719 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.002032 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 16 00:10:20 crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: source /etc/kubernetes/apiserver-url.env Feb 16 00:10:20 crc kubenswrapper[5114]: else Feb 16 00:10:20 crc kubenswrapper[5114]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 16 00:10:20 crc kubenswrapper[5114]: exit 1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 16 00:10:20 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.003242 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.004514 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.012433 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.012499 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phgcx\" (UniqueName: \"kubernetes.io/projected/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-kube-api-access-phgcx\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.012570 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.012767 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013059 5114 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013094 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013110 5114 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013125 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013139 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013162 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013177 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013194 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 
crc kubenswrapper[5114]: I0216 00:10:20.013208 5114 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013227 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.013269 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.014463 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015594 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015789 5114 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015815 5114 
reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015836 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015853 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015868 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015888 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015902 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015918 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015932 5114 reconciler_common.go:299] "Volume 
detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015953 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015970 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015985 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015711 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"764b478d-1d01-4d84-b45d-6590a38497c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b10c64884bbd71e2157b1670c58209bda6bd063665c1ac3d058e91ad3a7fc7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://58ea7cf355069731d736ded1f9a033e00b7f747f4a993b9d00516ab40c56d783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://05b2d05490e4cfff0b22711d5a8c00f6728fa0e633a8b993400a629d4424fb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://33765468880ba21c7b0362a460e75d6e28decbeb2daa74e65202f1e4ac174738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc05bbf6d8b5e02515a1cbcd8639ce40b8118b0262ad8073c708dfa30ba9a54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a68
2480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-
o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.015997 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016281 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016361 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016394 5114 reconciler_common.go:299] "Volume detached 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016407 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016423 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016438 5114 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016481 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016496 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016509 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016526 5114 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 16 
00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016540 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016584 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016599 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016702 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016717 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016792 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016810 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016850 5114 reconciler_common.go:299] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016872 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016887 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.016991 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017013 5114 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017029 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017044 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017065 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: 
\"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017079 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017093 5114 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017107 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017125 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017140 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017154 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017167 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc 
kubenswrapper[5114]: I0216 00:10:20.017186 5114 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017199 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017216 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017233 5114 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017264 5114 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017279 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017293 5114 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017312 5114 reconciler_common.go:299] "Volume detached for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017330 5114 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017346 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017359 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017375 5114 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017390 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017404 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017419 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: 
\"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017436 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017449 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017463 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017481 5114 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017494 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017508 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017521 5114 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017538 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017551 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017566 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017580 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017597 5114 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017610 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017627 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" 
DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017642 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017659 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017672 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017684 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017772 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017788 5114 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017802 5114 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc 
kubenswrapper[5114]: I0216 00:10:20.017819 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017837 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017853 5114 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017891 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017909 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017933 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017947 5114 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 
00:10:20.017961 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017977 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.017991 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018004 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018017 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018035 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018051 5114 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018065 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018417 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018447 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018460 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018480 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018492 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018503 5114 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018514 5114 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node 
\"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018528 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018538 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018548 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018559 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018572 5114 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018582 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018597 5114 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018608 
5114 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018617 5114 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018627 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018676 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018691 5114 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018725 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018736 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018746 5114 reconciler_common.go:299] "Volume detached for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018759 5114 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018772 5114 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018781 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018793 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018802 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018811 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018821 5114 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018836 5114 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018845 5114 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018854 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018864 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018876 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018887 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018897 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") 
on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018908 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018920 5114 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018930 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018938 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018950 5114 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018960 5114 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018970 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018980 
5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.018992 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.019001 5114 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.019012 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.019021 5114 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.019033 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.020865 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Feb 16 00:10:20 
crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: source "/env/_master" Feb 16 00:10:20 crc kubenswrapper[5114]: set +o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 16 00:10:20 crc kubenswrapper[5114]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 16 00:10:20 crc kubenswrapper[5114]: ho_enable="--enable-hybrid-overlay" Feb 16 00:10:20 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 16 00:10:20 crc kubenswrapper[5114]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 16 00:10:20 crc kubenswrapper[5114]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --webhook-host=127.0.0.1 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --webhook-port=9743 \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ho_enable} \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-interconnect \ Feb 16 00:10:20 crc kubenswrapper[5114]: --disable-approver \ Feb 16 00:10:20 crc kubenswrapper[5114]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --wait-for-kubernetes-api=200s \ Feb 16 00:10:20 crc kubenswrapper[5114]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Feb 16 00:10:20 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc 
kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.022205 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.024174 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.024199 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: source "/env/_master" Feb 16 00:10:20 crc kubenswrapper[5114]: set +o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --disable-webhook \ Feb 16 00:10:20 crc kubenswrapper[5114]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Feb 16 00:10:20 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.027185 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to 
\"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.030440 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.032283 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phgcx\" (UniqueName: \"kubernetes.io/projected/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-kube-api-access-phgcx\") pod \"ovnkube-control-plane-57b78d8988-44hnf\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: W0216 00:10:20.037276 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod428b39f5_eb1c_4f65_b7a4_eeb6e84860cc.slice/crio-925f246613ed8bc0efb137afde67f86cf0329373c5692e7bc9a47b8b313430e3 WatchSource:0}: Error finding container 925f246613ed8bc0efb137afde67f86cf0329373c5692e7bc9a47b8b313430e3: Status 404 returned error can't find the container with id 925f246613ed8bc0efb137afde67f86cf0329373c5692e7bc9a47b8b313430e3 Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.040226 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.041729 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.042594 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.056733 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-zp67w" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.056715 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.066319 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-72dpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctpq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-72dpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: W0216 00:10:20.066518 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbb290fa_349e_4aa8_b21a_00ef48fba6e7.slice/crio-4c7e7fb5d0282f7e9844e77314d92afe4d744bde8d627051589ceb297563c1a2 WatchSource:0}: Error finding container 4c7e7fb5d0282f7e9844e77314d92afe4d744bde8d627051589ceb297563c1a2: Status 404 returned error can't find the container with id 4c7e7fb5d0282f7e9844e77314d92afe4d744bde8d627051589ceb297563c1a2 Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.068009 5114 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.069810 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 16 00:10:20 crc kubenswrapper[5114]: set -uo pipefail Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 16 00:10:20 crc kubenswrapper[5114]: HOSTS_FILE="/etc/hosts" Feb 16 00:10:20 crc kubenswrapper[5114]: TEMP_FILE="/tmp/hosts.tmp" Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Make a temporary file with the old hosts file's attributes. Feb 16 00:10:20 crc kubenswrapper[5114]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 16 00:10:20 crc kubenswrapper[5114]: echo "Failed to preserve hosts file. Exiting." Feb 16 00:10:20 crc kubenswrapper[5114]: exit 1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: while true; do Feb 16 00:10:20 crc kubenswrapper[5114]: declare -A svc_ips Feb 16 00:10:20 crc kubenswrapper[5114]: for svc in "${services[@]}"; do Feb 16 00:10:20 crc kubenswrapper[5114]: # Fetch service IP from cluster dns if present. We make several tries Feb 16 00:10:20 crc kubenswrapper[5114]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Feb 16 00:10:20 crc kubenswrapper[5114]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 16 00:10:20 crc kubenswrapper[5114]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 16 00:10:20 crc kubenswrapper[5114]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 16 00:10:20 crc kubenswrapper[5114]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 16 00:10:20 crc kubenswrapper[5114]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 16 00:10:20 crc kubenswrapper[5114]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 16 00:10:20 crc kubenswrapper[5114]: for i in ${!cmds[*]} Feb 16 00:10:20 crc kubenswrapper[5114]: do Feb 16 00:10:20 crc kubenswrapper[5114]: ips=($(eval "${cmds[i]}")) Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: svc_ips["${svc}"]="${ips[@]}" Feb 16 00:10:20 crc kubenswrapper[5114]: break Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Update /etc/hosts only if we get valid service IPs Feb 16 00:10:20 crc kubenswrapper[5114]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 16 00:10:20 crc kubenswrapper[5114]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 16 00:10:20 crc kubenswrapper[5114]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 16 00:10:20 crc kubenswrapper[5114]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait Feb 16 00:10:20 crc kubenswrapper[5114]: continue Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Append resolver entries for services Feb 16 00:10:20 crc kubenswrapper[5114]: rc=0 Feb 16 00:10:20 crc kubenswrapper[5114]: for svc in "${!svc_ips[@]}"; do Feb 16 00:10:20 crc kubenswrapper[5114]: for ip in ${svc_ips[${svc}]}; do Feb 16 00:10:20 crc kubenswrapper[5114]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ $rc -ne 0 ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait Feb 16 00:10:20 crc kubenswrapper[5114]: continue Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 16 00:10:20 crc kubenswrapper[5114]: # Replace /etc/hosts with our modified version if needed Feb 16 00:10:20 crc kubenswrapper[5114]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 16 00:10:20 crc kubenswrapper[5114]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait Feb 16 00:10:20 crc kubenswrapper[5114]: unset svc_ips Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skmcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-zp67w_openshift-dns(cbb290fa-349e-4aa8-b21a-00ef48fba6e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.071011 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not 
yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-zp67w" podUID="cbb290fa-349e-4aa8-b21a-00ef48fba6e7" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.072494 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.072542 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.072561 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.072589 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.072610 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.080016 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1062ad-2431-42c0-950b-f12aded97fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbb2f8f39b9f3bee939bb471570744d580cfdb439c253b8460cacbfda0adfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebf9c3d019e33707c276dab2a0fc3eded08e87049610ece88fb23aebc8fe70fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5dbac4f55a4e2c2f3e9685aef58c61e28ac3f768691715b8218f6a5c80dd6d81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.083715 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:10:20 crc kubenswrapper[5114]: W0216 00:10:20.088477 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b3c2120_6c92_4855_86fc_a08ba5b7f48c.slice/crio-9015acb8881104f78edb88af78faa0f1ff7c5e163a8507213340ebf1a7c54e64 WatchSource:0}: Error finding container 9015acb8881104f78edb88af78faa0f1ff7c5e163a8507213340ebf1a7c54e64: Status 404 returned error can't find the container with id 9015acb8881104f78edb88af78faa0f1ff7c5e163a8507213340ebf1a7c54e64 Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.092800 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 16 
00:10:20 crc kubenswrapper[5114]: apiVersion: v1 Feb 16 00:10:20 crc kubenswrapper[5114]: clusters: Feb 16 00:10:20 crc kubenswrapper[5114]: - cluster: Feb 16 00:10:20 crc kubenswrapper[5114]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 16 00:10:20 crc kubenswrapper[5114]: server: https://api-int.crc.testing:6443 Feb 16 00:10:20 crc kubenswrapper[5114]: name: default-cluster Feb 16 00:10:20 crc kubenswrapper[5114]: contexts: Feb 16 00:10:20 crc kubenswrapper[5114]: - context: Feb 16 00:10:20 crc kubenswrapper[5114]: cluster: default-cluster Feb 16 00:10:20 crc kubenswrapper[5114]: namespace: default Feb 16 00:10:20 crc kubenswrapper[5114]: user: default-auth Feb 16 00:10:20 crc kubenswrapper[5114]: name: default-context Feb 16 00:10:20 crc kubenswrapper[5114]: current-context: default-context Feb 16 00:10:20 crc kubenswrapper[5114]: kind: Config Feb 16 00:10:20 crc kubenswrapper[5114]: preferences: {} Feb 16 00:10:20 crc kubenswrapper[5114]: users: Feb 16 00:10:20 crc kubenswrapper[5114]: - name: default-auth Feb 16 00:10:20 crc kubenswrapper[5114]: user: Feb 16 00:10:20 crc kubenswrapper[5114]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 16 00:10:20 crc kubenswrapper[5114]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 16 00:10:20 crc kubenswrapper[5114]: EOF Feb 16 00:10:20 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxrth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9clwb_openshift-ovn-kubernetes(6b3c2120-6c92-4855-86fc-a08ba5b7f48c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.094773 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.097033 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e654f43c-5ba1-48a5-87ae-f6672304d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"
name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wlt2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: W0216 00:10:20.098965 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6929dc4_3c97_49e3_b4c6_cc35d5e7b917.slice/crio-bc62223fd73d44ab7d9b678d6f5463c60d1878ca737c68d042b512129a7720a9 WatchSource:0}: Error finding container bc62223fd73d44ab7d9b678d6f5463c60d1878ca737c68d042b512129a7720a9: Status 404 returned error can't find the container with id bc62223fd73d44ab7d9b678d6f5463c60d1878ca737c68d042b512129a7720a9 Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.102087 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42vvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-vp5kn_openshift-machine-config-operator(b6929dc4-3c97-49e3-b4c6-cc35d5e7b917): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.105068 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42vvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-vp5kn_openshift-machine-config-operator(b6929dc4-3c97-49e3-b4c6-cc35d5e7b917): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.105994 5114 
status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-44hnf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.106329 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.112956 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-72dpq" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.119838 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb4fab3c-e950-4dec-a922-1f9ca4612ef5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://288e2fbc2214d418ac3020d245ad8aaf063f8e63b8fb410077b4f83c7b0e8887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3bf4f94ba97d4ae528d0ebb96d364672d87f90e197fea356ea55ca938edadcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6f1dde85e03a42b4451963a332e5b67b46f9f2e20df9ff9d84072649ce88c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: W0216 00:10:20.125223 5114 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda17caad8_b1e3_46bb_a3fe_843bba1b8f97.slice/crio-fe0e1e4a13facad1cef70b99cabfaff138e77620ba6e3befb3f91f6e1da1d60c WatchSource:0}: Error finding container fe0e1e4a13facad1cef70b99cabfaff138e77620ba6e3befb3f91f6e1da1d60c: Status 404 returned error can't find the container with id fe0e1e4a13facad1cef70b99cabfaff138e77620ba6e3befb3f91f6e1da1d60c Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.127878 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 16 00:10:20 crc kubenswrapper[5114]: while [ true ]; Feb 16 00:10:20 crc kubenswrapper[5114]: do Feb 16 00:10:20 crc kubenswrapper[5114]: for f in $(ls /tmp/serviceca); do Feb 16 00:10:20 crc kubenswrapper[5114]: echo $f Feb 16 00:10:20 crc kubenswrapper[5114]: ca_file_path="/tmp/serviceca/${f}" Feb 16 00:10:20 crc kubenswrapper[5114]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 16 00:10:20 crc kubenswrapper[5114]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 16 00:10:20 crc kubenswrapper[5114]: if [ -e "${reg_dir_path}" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 16 00:10:20 crc kubenswrapper[5114]: else Feb 16 00:10:20 crc kubenswrapper[5114]: mkdir $reg_dir_path Feb 16 00:10:20 crc kubenswrapper[5114]: cp $ca_file_path $reg_dir_path/ca.crt Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: for d in $(ls /etc/docker/certs.d); do Feb 16 00:10:20 crc kubenswrapper[5114]: echo $d Feb 16 00:10:20 crc kubenswrapper[5114]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 16 
00:10:20 crc kubenswrapper[5114]: reg_conf_path="/tmp/serviceca/${dp}" Feb 16 00:10:20 crc kubenswrapper[5114]: if [ ! -e "${reg_conf_path}" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: rm -rf /etc/docker/certs.d/$d Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait ${!} Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctpq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-72dpq_openshift-image-registry(a17caad8-b1e3-46bb-a3fe-843bba1b8f97): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc 
kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.129008 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-72dpq" podUID="a17caad8-b1e3-46bb-a3fe-843bba1b8f97" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.133277 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bba7bce0-0647-459f-b5c3-17499167a67e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://194a5bb705405e17e124fa501a1108736f68e3acb7d24b8735925b360887f0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\
",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.140745 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-5jlj6" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.147034 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.151605 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.153845 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:10:20 crc kubenswrapper[5114]: W0216 00:10:20.162499 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4627438_b1a6_4cc9_85f6_10e9dd97943b.slice/crio-8e87a533310493b5c3579c3dd0791ce264b61790371a2751d331e0cfc66aefe9 WatchSource:0}: Error finding container 8e87a533310493b5c3579c3dd0791ce264b61790371a2751d331e0cfc66aefe9: Status 404 returned error can't find the container with id 8e87a533310493b5c3579c3dd0791ce264b61790371a2751d331e0cfc66aefe9 Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.168707 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.175521 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.175578 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.175590 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.175629 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.175645 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.177595 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 16 00:10:20 crc kubenswrapper[5114]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 16 00:10:20 crc kubenswrapper[5114]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq4ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-5jlj6_openshift-multus(c4627438-b1a6-4cc9-85f6-10e9dd97943b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.179450 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-5jlj6" podUID="c4627438-b1a6-4cc9-85f6-10e9dd97943b" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.183405 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,
RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2glh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wlt2s_openshift-multus(e654f43c-5ba1-48a5-87ae-f6672304d245): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.184648 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" podUID="e654f43c-5ba1-48a5-87ae-f6672304d245" Feb 16 00:10:20 crc kubenswrapper[5114]: W0216 00:10:20.188858 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a832ec7_da6a_4e0b_8b74_47f2038c0c13.slice/crio-892d2b4ac5d8bfb4f0f72f70eefb56d9ccaf4de7777ead9a2b067bdc2c88ae69 WatchSource:0}: Error finding container 892d2b4ac5d8bfb4f0f72f70eefb56d9ccaf4de7777ead9a2b067bdc2c88ae69: Status 404 returned error can't find the container with id 892d2b4ac5d8bfb4f0f72f70eefb56d9ccaf4de7777ead9a2b067bdc2c88ae69 Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.192269 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 16 00:10:20 crc kubenswrapper[5114]: set -euo pipefail Feb 16 00:10:20 crc kubenswrapper[5114]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 16 00:10:20 crc kubenswrapper[5114]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 16 00:10:20 crc kubenswrapper[5114]: # As the secret mount is optional we must wait for the files to be present. Feb 16 00:10:20 crc kubenswrapper[5114]: # The service is created in monitor.yaml and this is created in sdn.yaml. Feb 16 00:10:20 crc kubenswrapper[5114]: TS=$(date +%s) Feb 16 00:10:20 crc kubenswrapper[5114]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 16 00:10:20 crc kubenswrapper[5114]: HAS_LOGGED_INFO=0 Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: log_missing_certs(){ Feb 16 00:10:20 crc kubenswrapper[5114]: CUR_TS=$(date +%s) Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 16 00:10:20 crc kubenswrapper[5114]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 16 00:10:20 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 16 00:10:20 crc kubenswrapper[5114]: HAS_LOGGED_INFO=1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: } Feb 16 00:10:20 crc kubenswrapper[5114]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 16 00:10:20 crc kubenswrapper[5114]: log_missing_certs Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 5 Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/kube-rbac-proxy \ Feb 16 00:10:20 crc kubenswrapper[5114]: --logtostderr \ Feb 16 00:10:20 crc kubenswrapper[5114]: --secure-listen-address=:9108 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --upstream=http://127.0.0.1:29108/ \ Feb 16 00:10:20 crc kubenswrapper[5114]: --tls-private-key-file=${TLS_PK} \ Feb 16 00:10:20 crc kubenswrapper[5114]: --tls-cert-file=${TLS_CERT} Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phgcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-44hnf_openshift-ovn-kubernetes(1a832ec7-da6a-4e0b-8b74-47f2038c0c13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.195665 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: source "/env/_master" Feb 16 00:10:20 crc kubenswrapper[5114]: set +o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 16 
00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # This is needed so that converting clusters from GA to TP Feb 16 00:10:20 crc kubenswrapper[5114]: # will rollout control plane pods as well Feb 16 00:10:20 crc kubenswrapper[5114]: network_segmentation_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Feb 16 00:10:20 crc kubenswrapper[5114]: fi 
Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" != "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: route_advertisements_enable_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Enable multi-network policy if configured (control-plane always full mode) Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_policy_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Enable admin network policy if configured (control-plane always full mode) Feb 16 00:10:20 crc kubenswrapper[5114]: admin_network_policy_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: 
admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: if [ "shared" == "shared" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode shared" Feb 16 00:10:20 crc kubenswrapper[5114]: elif [ "shared" == "local" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode local" Feb 16 00:10:20 crc kubenswrapper[5114]: else Feb 16 00:10:20 crc kubenswrapper[5114]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 16 00:10:20 crc kubenswrapper[5114]: exit 1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/ovnkube \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-interconnect \ Feb 16 00:10:20 crc kubenswrapper[5114]: --init-cluster-manager "${K8S_NODE}" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 16 00:10:20 crc kubenswrapper[5114]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --metrics-bind-address "127.0.0.1:29108" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --metrics-enable-pprof \ Feb 16 00:10:20 crc kubenswrapper[5114]: --metrics-enable-config-duration \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v4_join_subnet_opt} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v6_join_subnet_opt} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v4_transit_switch_subnet_opt} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${dns_name_resolver_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${persistent_ips_enabled_flag} \ Feb 16 00:10:20 crc 
kubenswrapper[5114]: ${multi_network_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${network_segmentation_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${gateway_mode_flags} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${route_advertisements_enable_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${preconfigured_udn_addresses_enable_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-ip=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-firewall=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-qos=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-service=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-multicast \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-multi-external-gateway=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${multi_network_policy_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${admin_network_policy_enabled_flag} Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phgcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-44hnf_openshift-ovn-kubernetes(1a832ec7-da6a-4e0b-8b74-47f2038c0c13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.196957 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.244664 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" 
event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerStarted","Data":"8df1be26b01622611bb46cfa3fb4e192d9a86d7b995365651a51d94334d1108c"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.248178 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jlj6" event={"ID":"c4627438-b1a6-4cc9-85f6-10e9dd97943b","Type":"ContainerStarted","Data":"8e87a533310493b5c3579c3dd0791ce264b61790371a2751d331e0cfc66aefe9"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.248336 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2glh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,
Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-wlt2s_openshift-multus(e654f43c-5ba1-48a5-87ae-f6672304d245): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.249592 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" podUID="e654f43c-5ba1-48a5-87ae-f6672304d245" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.250174 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Feb 16 00:10:20 crc kubenswrapper[5114]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Feb 16 00:10:20 crc kubenswrapper[5114]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pq4ff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-5jlj6_openshift-multus(c4627438-b1a6-4cc9-85f6-10e9dd97943b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.251285 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-5jlj6" podUID="c4627438-b1a6-4cc9-85f6-10e9dd97943b" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.251499 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zp67w" event={"ID":"cbb290fa-349e-4aa8-b21a-00ef48fba6e7","Type":"ContainerStarted","Data":"4c7e7fb5d0282f7e9844e77314d92afe4d744bde8d627051589ceb297563c1a2"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.253475 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Feb 16 00:10:20 crc kubenswrapper[5114]: set -uo pipefail Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Feb 16 00:10:20 crc kubenswrapper[5114]: HOSTS_FILE="/etc/hosts" Feb 16 00:10:20 crc kubenswrapper[5114]: TEMP_FILE="/tmp/hosts.tmp" Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 
crc kubenswrapper[5114]: IFS=', ' read -r -a services <<< "${SERVICES}" Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Make a temporary file with the old hosts file's attributes. Feb 16 00:10:20 crc kubenswrapper[5114]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Feb 16 00:10:20 crc kubenswrapper[5114]: echo "Failed to preserve hosts file. Exiting." Feb 16 00:10:20 crc kubenswrapper[5114]: exit 1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: while true; do Feb 16 00:10:20 crc kubenswrapper[5114]: declare -A svc_ips Feb 16 00:10:20 crc kubenswrapper[5114]: for svc in "${services[@]}"; do Feb 16 00:10:20 crc kubenswrapper[5114]: # Fetch service IP from cluster dns if present. We make several tries Feb 16 00:10:20 crc kubenswrapper[5114]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Feb 16 00:10:20 crc kubenswrapper[5114]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Feb 16 00:10:20 crc kubenswrapper[5114]: # support UDP loadbalancers and require reaching DNS through TCP. Feb 16 00:10:20 crc kubenswrapper[5114]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 16 00:10:20 crc kubenswrapper[5114]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 16 00:10:20 crc kubenswrapper[5114]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Feb 16 00:10:20 crc kubenswrapper[5114]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Feb 16 00:10:20 crc kubenswrapper[5114]: for i in ${!cmds[*]} Feb 16 00:10:20 crc kubenswrapper[5114]: do Feb 16 00:10:20 crc kubenswrapper[5114]: ips=($(eval "${cmds[i]}")) Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: svc_ips["${svc}"]="${ips[@]}" Feb 16 00:10:20 crc kubenswrapper[5114]: break Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Update /etc/hosts only if we get valid service IPs Feb 16 00:10:20 crc kubenswrapper[5114]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Feb 16 00:10:20 crc kubenswrapper[5114]: # Stale entries could exist in /etc/hosts if the service is deleted Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -n "${svc_ips[*]-}" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Feb 16 00:10:20 crc kubenswrapper[5114]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Feb 16 00:10:20 crc kubenswrapper[5114]: # Only continue rebuilding the hosts entries if its original content is preserved Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait Feb 16 00:10:20 crc kubenswrapper[5114]: continue Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Append resolver entries for services Feb 16 00:10:20 crc kubenswrapper[5114]: rc=0 Feb 16 00:10:20 crc kubenswrapper[5114]: for svc in "${!svc_ips[@]}"; do Feb 16 00:10:20 crc kubenswrapper[5114]: for ip in ${svc_ips[${svc}]}; do Feb 16 00:10:20 crc kubenswrapper[5114]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ $rc -ne 0 ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait Feb 16 00:10:20 crc kubenswrapper[5114]: continue Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Feb 16 00:10:20 crc kubenswrapper[5114]: # Replace /etc/hosts with our modified version if needed Feb 16 00:10:20 crc kubenswrapper[5114]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Feb 16 00:10:20 crc kubenswrapper[5114]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait Feb 16 00:10:20 crc kubenswrapper[5114]: unset svc_ips Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-skmcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-zp67w_openshift-dns(cbb290fa-349e-4aa8-b21a-00ef48fba6e7): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.255391 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-zp67w" podUID="cbb290fa-349e-4aa8-b21a-00ef48fba6e7" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.257121 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-72dpq" 
event={"ID":"a17caad8-b1e3-46bb-a3fe-843bba1b8f97","Type":"ContainerStarted","Data":"fe0e1e4a13facad1cef70b99cabfaff138e77620ba6e3befb3f91f6e1da1d60c"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.257751 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bba7bce0-0647-459f-b5c3-17499167a67e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://194a5bb705405e17e124fa501a1108736f68e3acb7d24b8735925b360887f0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-0
2-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.258849 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Feb 16 00:10:20 crc kubenswrapper[5114]: while [ true ]; Feb 16 00:10:20 crc kubenswrapper[5114]: do Feb 16 00:10:20 crc kubenswrapper[5114]: for f in $(ls /tmp/serviceca); do Feb 16 00:10:20 crc kubenswrapper[5114]: echo $f Feb 16 00:10:20 crc kubenswrapper[5114]: ca_file_path="/tmp/serviceca/${f}" Feb 16 00:10:20 crc kubenswrapper[5114]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Feb 16 00:10:20 crc kubenswrapper[5114]: reg_dir_path="/etc/docker/certs.d/${f}" Feb 16 00:10:20 crc kubenswrapper[5114]: if [ -e "${reg_dir_path}" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: cp -u $ca_file_path $reg_dir_path/ca.crt Feb 16 00:10:20 crc kubenswrapper[5114]: else Feb 16 00:10:20 crc kubenswrapper[5114]: mkdir $reg_dir_path Feb 16 00:10:20 crc kubenswrapper[5114]: cp $ca_file_path $reg_dir_path/ca.crt Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: for d in $(ls /etc/docker/certs.d); do Feb 16 00:10:20 crc kubenswrapper[5114]: echo $d Feb 16 00:10:20 crc kubenswrapper[5114]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Feb 16 00:10:20 crc kubenswrapper[5114]: reg_conf_path="/tmp/serviceca/${dp}" Feb 16 00:10:20 crc kubenswrapper[5114]: if [ ! 
-e "${reg_conf_path}" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: rm -rf /etc/docker/certs.d/$d Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 60 & wait ${!} Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ctpq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-72dpq_openshift-image-registry(a17caad8-b1e3-46bb-a3fe-843bba1b8f97): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.259973 5114 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-72dpq" podUID="a17caad8-b1e3-46bb-a3fe-843bba1b8f97" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.260031 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"9015acb8881104f78edb88af78faa0f1ff7c5e163a8507213340ebf1a7c54e64"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.261863 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"bc62223fd73d44ab7d9b678d6f5463c60d1878ca737c68d042b512129a7720a9"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.261938 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Feb 16 00:10:20 crc kubenswrapper[5114]: apiVersion: v1 Feb 16 00:10:20 crc kubenswrapper[5114]: clusters: Feb 16 00:10:20 crc kubenswrapper[5114]: - cluster: Feb 16 00:10:20 crc kubenswrapper[5114]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Feb 16 00:10:20 crc kubenswrapper[5114]: server: https://api-int.crc.testing:6443 Feb 16 00:10:20 crc kubenswrapper[5114]: name: default-cluster Feb 16 00:10:20 crc kubenswrapper[5114]: contexts: Feb 16 00:10:20 crc kubenswrapper[5114]: - context: Feb 16 00:10:20 crc kubenswrapper[5114]: cluster: default-cluster Feb 16 00:10:20 crc kubenswrapper[5114]: namespace: default 
Feb 16 00:10:20 crc kubenswrapper[5114]: user: default-auth Feb 16 00:10:20 crc kubenswrapper[5114]: name: default-context Feb 16 00:10:20 crc kubenswrapper[5114]: current-context: default-context Feb 16 00:10:20 crc kubenswrapper[5114]: kind: Config Feb 16 00:10:20 crc kubenswrapper[5114]: preferences: {} Feb 16 00:10:20 crc kubenswrapper[5114]: users: Feb 16 00:10:20 crc kubenswrapper[5114]: - name: default-auth Feb 16 00:10:20 crc kubenswrapper[5114]: user: Feb 16 00:10:20 crc kubenswrapper[5114]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 16 00:10:20 crc kubenswrapper[5114]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Feb 16 00:10:20 crc kubenswrapper[5114]: EOF Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxrth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-9clwb_openshift-ovn-kubernetes(6b3c2120-6c92-4855-86fc-a08ba5b7f48c): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.263906 5114 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.264216 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42vvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-vp5kn_openshift-machine-config-operator(b6929dc4-3c97-49e3-b4c6-cc35d5e7b917): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.265455 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"925f246613ed8bc0efb137afde67f86cf0329373c5692e7bc9a47b8b313430e3"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.266638 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services 
have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.267030 5114 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-42vvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-vp5kn_openshift-machine-config-operator(b6929dc4-3c97-49e3-b4c6-cc35d5e7b917): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.267585 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"a8d292cfec99dcaabe9795987da7c2cdc1c013af4e26465407dcf8664959d52c"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.268101 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.268180 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed 
to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.268864 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: source "/env/_master" Feb 16 00:10:20 crc kubenswrapper[5114]: set +o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Feb 16 00:10:20 crc kubenswrapper[5114]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 16 00:10:20 crc kubenswrapper[5114]: ho_enable="--enable-hybrid-overlay" Feb 16 00:10:20 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 16 00:10:20 crc kubenswrapper[5114]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 16 00:10:20 crc kubenswrapper[5114]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --webhook-host=127.0.0.1 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --webhook-port=9743 \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ho_enable} \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-interconnect \ Feb 16 00:10:20 crc kubenswrapper[5114]: --disable-approver \ Feb 16 00:10:20 crc kubenswrapper[5114]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --wait-for-kubernetes-api=200s \ Feb 16 00:10:20 crc kubenswrapper[5114]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.270516 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.273036 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: source "/env/_master" Feb 16 00:10:20 crc kubenswrapper[5114]: set +o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --disable-webhook \ Feb 16 00:10:20 crc kubenswrapper[5114]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --loglevel="${LOGLEVEL}" Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.273184 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"f2fbddd55bc18d718c300fbfd49da4db2d670ff6d582b42a060b6af67200f059"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.274154 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with 
CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.274984 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" event={"ID":"1a832ec7-da6a-4e0b-8b74-47f2038c0c13","Type":"ContainerStarted","Data":"892d2b4ac5d8bfb4f0f72f70eefb56d9ccaf4de7777ead9a2b067bdc2c88ae69"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.276710 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Feb 16 00:10:20 crc kubenswrapper[5114]: set -euo pipefail Feb 16 00:10:20 crc kubenswrapper[5114]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Feb 16 00:10:20 crc kubenswrapper[5114]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Feb 16 00:10:20 crc kubenswrapper[5114]: # As the secret mount is optional we must wait for the files to be present. Feb 16 00:10:20 crc kubenswrapper[5114]: # The service is created in monitor.yaml and this is created in sdn.yaml. 
Feb 16 00:10:20 crc kubenswrapper[5114]: TS=$(date +%s) Feb 16 00:10:20 crc kubenswrapper[5114]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Feb 16 00:10:20 crc kubenswrapper[5114]: HAS_LOGGED_INFO=0 Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: log_missing_certs(){ Feb 16 00:10:20 crc kubenswrapper[5114]: CUR_TS=$(date +%s) Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Feb 16 00:10:20 crc kubenswrapper[5114]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Feb 16 00:10:20 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Feb 16 00:10:20 crc kubenswrapper[5114]: HAS_LOGGED_INFO=1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: } Feb 16 00:10:20 crc kubenswrapper[5114]: while [[ ! -f "${TLS_PK}" || ! 
-f "${TLS_CERT}" ]] ; do Feb 16 00:10:20 crc kubenswrapper[5114]: log_missing_certs Feb 16 00:10:20 crc kubenswrapper[5114]: sleep 5 Feb 16 00:10:20 crc kubenswrapper[5114]: done Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/kube-rbac-proxy \ Feb 16 00:10:20 crc kubenswrapper[5114]: --logtostderr \ Feb 16 00:10:20 crc kubenswrapper[5114]: --secure-listen-address=:9108 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Feb 16 00:10:20 crc kubenswrapper[5114]: --upstream=http://127.0.0.1:29108/ \ Feb 16 00:10:20 crc kubenswrapper[5114]: --tls-private-key-file=${TLS_PK} \ Feb 16 00:10:20 crc kubenswrapper[5114]: --tls-cert-file=${TLS_CERT} Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phgcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-44hnf_openshift-ovn-kubernetes(1a832ec7-da6a-4e0b-8b74-47f2038c0c13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.278115 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.278135 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.278145 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.278159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.278168 5114 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.278585 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Feb 16 00:10:20 crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: source /etc/kubernetes/apiserver-url.env Feb 16 00:10:20 crc kubenswrapper[5114]: else Feb 16 00:10:20 crc kubenswrapper[5114]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 16 00:10:20 crc kubenswrapper[5114]: exit 1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 16 00:10:20 crc kubenswrapper[5114]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.279508 5114 kuberuntime_manager.go:1358] "Unhandled Error" err=< Feb 16 00:10:20 crc kubenswrapper[5114]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ -f "/env/_master" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: set -o allexport Feb 16 00:10:20 crc kubenswrapper[5114]: source "/env/_master" Feb 16 00:10:20 crc kubenswrapper[5114]: set +o allexport Feb 16 
00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "" != "" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: persistent_ips_enabled_flag="--enable-persistent-ips" Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # This is needed so that converting clusters from GA to TP Feb 16 00:10:20 crc kubenswrapper[5114]: # will rollout control plane pods as well Feb 16 00:10:20 crc kubenswrapper[5114]: 
network_segmentation_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" != "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_enabled_flag="--enable-multi-network" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: network_segmentation_enabled_flag="--enable-network-segmentation" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: route_advertisements_enable_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: route_advertisements_enable_flag="--enable-route-advertisements" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Enable multi-network policy if configured (control-plane always full mode) Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_policy_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "false" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: # Enable 
admin network policy if configured (control-plane always full mode) Feb 16 00:10:20 crc kubenswrapper[5114]: admin_network_policy_enabled_flag= Feb 16 00:10:20 crc kubenswrapper[5114]: if [[ "true" == "true" ]]; then Feb 16 00:10:20 crc kubenswrapper[5114]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: if [ "shared" == "shared" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode shared" Feb 16 00:10:20 crc kubenswrapper[5114]: elif [ "shared" == "local" ]; then Feb 16 00:10:20 crc kubenswrapper[5114]: gateway_mode_flags="--gateway-mode local" Feb 16 00:10:20 crc kubenswrapper[5114]: else Feb 16 00:10:20 crc kubenswrapper[5114]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Feb 16 00:10:20 crc kubenswrapper[5114]: exit 1 Feb 16 00:10:20 crc kubenswrapper[5114]: fi Feb 16 00:10:20 crc kubenswrapper[5114]: Feb 16 00:10:20 crc kubenswrapper[5114]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Feb 16 00:10:20 crc kubenswrapper[5114]: exec /usr/bin/ovnkube \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-interconnect \ Feb 16 00:10:20 crc kubenswrapper[5114]: --init-cluster-manager "${K8S_NODE}" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --config-file=/run/ovnkube-config/ovnkube.conf \ Feb 16 00:10:20 crc kubenswrapper[5114]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --metrics-bind-address "127.0.0.1:29108" \ Feb 16 00:10:20 crc kubenswrapper[5114]: --metrics-enable-pprof \ Feb 16 00:10:20 crc kubenswrapper[5114]: --metrics-enable-config-duration \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v4_join_subnet_opt} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v6_join_subnet_opt} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v4_transit_switch_subnet_opt} \ 
Feb 16 00:10:20 crc kubenswrapper[5114]: ${ovn_v6_transit_switch_subnet_opt} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${dns_name_resolver_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${persistent_ips_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${multi_network_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${network_segmentation_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${gateway_mode_flags} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${route_advertisements_enable_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${preconfigured_udn_addresses_enable_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-ip=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-firewall=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-qos=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-egress-service=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-multicast \ Feb 16 00:10:20 crc kubenswrapper[5114]: --enable-multi-external-gateway=true \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${multi_network_policy_enabled_flag} \ Feb 16 00:10:20 crc kubenswrapper[5114]: ${admin_network_policy_enabled_flag} Feb 16 00:10:20 crc kubenswrapper[5114]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-phgcx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-44hnf_openshift-ovn-kubernetes(1a832ec7-da6a-4e0b-8b74-47f2038c0c13): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 16 00:10:20 crc kubenswrapper[5114]: > logger="UnhandledError" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.279894 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.280491 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.280950 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.292125 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36e77927-3498-4ebe-bcc5-62b9ddc1ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\
\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc47
5aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:09:56Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0216 00:09:56.366393 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 00:09:56.366553 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0216 00:09:56.367494 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1793456589/tls.crt::/tmp/serving-cert-1793456589/tls.key\\\\\\\"\\\\nI0216 00:09:56.738309 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 00:09:56.741479 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 00:09:56.741559 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 00:09:56.741646 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 00:09:56.741696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 00:09:56.750507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 00:09:56.750535 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 00:09:56.750564 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750574 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 00:09:56.750585 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 00:09:56.750589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 00:09:56.750593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 00:09:56.751514 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:09:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\
":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.303288 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.314108 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vp5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.321127 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.321198 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.321324 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321352 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321429 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321501 5114 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:21.321469476 +0000 UTC m=+97.702746294 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321504 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321529 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:21.321516747 +0000 UTC m=+97.702793565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321544 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321565 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.321651 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:21.321624021 +0000 UTC m=+97.702900879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.321917 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vk5fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6149fdd-e85e-41f7-b50a-76f70c153c44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vk5fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.332107 5114 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-multus/multus-5jlj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4627438-b1a6-4cc9-85f6-10e9dd97943b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jlj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.339550 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-zp67w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-skmcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zp67w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.353159 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9clwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.379560 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"764b478d-1d01-4d84-b45d-6590a38497c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b10c64884bbd71e2157b1670c58209bda6bd063665c1ac3d058e91ad3a7fc7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"use
r\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://58ea7cf355069731d736ded1f9a033e00b7f747f4a993b9d00516ab40c56d783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://05b2d05490e4cfff0b22711d5a8c00f6728fa0e633a8b993400a629d4424fb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259
fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://33765468880ba21c7b0362a460e75d6e28decbeb2daa74e65202f1e4ac174738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://cc05bbf6d8b5e02515a1cbcd8639ce40b8118b0262ad8073c708dfa30ba9a54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.381032 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.381094 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.381108 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc 
kubenswrapper[5114]: I0216 00:10:20.381157 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.381172 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.389781 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.398320 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.410480 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.422960 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.423117 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.423277 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.423390 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs podName:d6149fdd-e85e-41f7-b50a-76f70c153c44 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:21.423370339 +0000 UTC m=+97.804647157 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs") pod "network-metrics-daemon-vk5fl" (UID: "d6149fdd-e85e-41f7-b50a-76f70c153c44") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.423514 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.423577 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.423634 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.424149 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:21.424138421 +0000 UTC m=+97.805415239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.439073 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-72dpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctpq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-72dpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.485054 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.485120 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc 
kubenswrapper[5114]: I0216 00:10:20.485142 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.485173 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.485196 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.486692 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1062ad-2431-42c0-950b-f12aded97fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbb2f8f39b9f3bee939bb471570744d580cfdb439c253b8460cacbfda0adfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebf9c3d019e33707c276dab2a0fc3eded08e87049610ece88fb23aebc8fe70fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5dbac4f55a4e2c2f3e9685aef58c61e28ac3f768691715b8218f6a5c80dd6d81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.523442 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e654f43c-5ba1-48a5-87ae-f6672304d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wlt2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.525847 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.526127 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:10:21.526087234 +0000 UTC m=+97.907364092 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.563435 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-44hnf\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.588324 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.588707 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.588974 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.589189 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.589301 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.601782 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb4fab3c-e950-4dec-a922-1f9ca4612ef5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://288e2fbc2214d418ac3020d245ad8aaf063f8e63b8fb410077b4f83c7b0e8887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3bf4f94ba97d4ae528d0ebb96d364672d87f90e197fea356ea55ca938edadcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6f1dde85e03a42b4451963a332e5b67b46f9f2e20df9ff9d84072649ce88c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.647714 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.684332 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.692176 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc 
kubenswrapper[5114]: I0216 00:10:20.692320 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.692354 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.692391 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.692424 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.730331 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36e77927-3498-4ebe-bcc5-62b9ddc1ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:09:56Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0216 00:09:56.366393 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 00:09:56.366553 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0216 00:09:56.367494 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1793456589/tls.crt::/tmp/serving-cert-1793456589/tls.key\\\\\\\"\\\\nI0216 00:09:56.738309 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 00:09:56.741479 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 00:09:56.741559 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 00:09:56.741646 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 00:09:56.741696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0216 00:09:56.750507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 00:09:56.750535 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 00:09:56.750564 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750574 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 00:09:56.750585 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 00:09:56.750589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 00:09:56.750593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 00:09:56.751514 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:09:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.767476 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.795711 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.795876 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.795906 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.795943 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.795967 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.807769 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vp5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.816539 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:20 crc kubenswrapper[5114]: E0216 00:10:20.816761 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.844426 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vk5fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6149fdd-e85e-41f7-b50a-76f70c153c44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vk5fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.882981 5114 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-multus/multus-5jlj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4627438-b1a6-4cc9-85f6-10e9dd97943b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jlj6\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.897928 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.898008 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.898036 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.898067 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.898092 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:20Z","lastTransitionTime":"2026-02-16T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.926552 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-zp67w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-skmcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zp67w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:20 crc kubenswrapper[5114]: I0216 00:10:20.980457 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9clwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.001662 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.001755 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.001783 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.001822 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.001852 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.022074 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"764b478d-1d01-4d84-b45d-6590a38497c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b10c64884bbd71e2157b1670c58209bda6bd063665c1ac3d058e91ad3a7fc7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://58ea7cf355069731d736ded1f9a033e00b7f747f4a993b9d00516ab40c56d783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://05b2d05490e4cfff0b22711d5a8c00f6728fa0e633a8b993400a629d4424fb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a
6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://33765468880ba21c7b0362a460e75d6e28decbeb2daa74e65202f1e4ac174738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc05bbf6d8b5e02515a1cbcd8639ce40b8118b0262ad8073c708dfa30ba9a54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"exitCode\\\":0
,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\
",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.045132 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.084728 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.105336 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.105393 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.105412 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.105442 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.105461 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.125095 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.160449 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-72dpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctpq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-72dpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.205187 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1062ad-2431-42c0-950b-f12aded97fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbb2f8f39b9f3bee939bb471570744d580cfdb439c253b8460cacbfda0adfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebf9c3d019e33707c276dab2a0fc3eded08e87049610ece88fb23aebc8fe70fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5dbac4f55a4e2c2f3e9685aef58c61e28ac3f768691715b8218f6a5c80dd6d81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.207935 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.208002 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.208029 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.208059 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.208083 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.246845 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e654f43c-5ba1-48a5-87ae-f6672304d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wlt2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.281910 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-44hnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.311984 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 
00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.312041 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.312059 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.312168 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.312190 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.322432 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb4fab3c-e950-4dec-a922-1f9ca4612ef5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://288e2fbc2214d418ac3020d245ad8aaf063f8e63b8fb410077b4f83c7b0e8887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\
\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3bf4f94ba97d4ae528d0ebb96d364672d87f90e197fea356ea55ca938edadcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6f1dde85e03a42b4451963a332e5b67b46f9f2e20df9ff9d84072649ce88c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\"
:0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.337930 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.338005 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.338123 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338180 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338307 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338339 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" 
failed. No retries permitted until 2026-02-16 00:10:23.33830185 +0000 UTC m=+99.719578728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338389 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:23.338365862 +0000 UTC m=+99.719642720 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338351 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338658 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338683 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.338795 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:23.338770564 +0000 UTC m=+99.720047412 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.361367 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bba7bce0-0647-459f-b5c3-17499167a67e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://194a5bb705405e17e124fa501a1108736f68e3acb7d24b8735925b360887f0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.415427 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc 
kubenswrapper[5114]: I0216 00:10:21.415517 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.415542 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.415576 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.415600 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.439496 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.439604 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.439725 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.439764 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.439766 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.439799 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.439865 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs podName:d6149fdd-e85e-41f7-b50a-76f70c153c44 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:23.439841102 +0000 UTC m=+99.821117920 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs") pod "network-metrics-daemon-vk5fl" (UID: "d6149fdd-e85e-41f7-b50a-76f70c153c44") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.440725 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:23.440688857 +0000 UTC m=+99.821965705 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.517515 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.517575 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.517594 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.517617 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.517635 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.541213 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.541667 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:10:23.541626032 +0000 UTC m=+99.922902880 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.579323 5114 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.620236 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.620330 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.620351 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc 
kubenswrapper[5114]: I0216 00:10:21.620378 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.620397 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.723852 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.723920 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.723938 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.723965 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.723984 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.815929 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.816022 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.815950 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.816198 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.816410 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:21 crc kubenswrapper[5114]: E0216 00:10:21.816561 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.821323 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.822160 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.824636 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.826773 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.826823 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.826840 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.826868 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.826898 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.826921 5114 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.830490 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.833220 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.834420 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.836122 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.836756 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.838403 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.840532 5114 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.842741 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.843373 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.845896 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.846454 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.847331 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.848480 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.849827 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.851887 5114 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.853304 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.854940 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.858070 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.859598 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.862585 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.864088 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.864981 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.867156 5114 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.867850 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.870177 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.871121 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.872938 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.874267 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.876893 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.878758 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.879966 5114 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.881018 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.882174 5114 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.882299 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.886222 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.888289 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.889954 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.891638 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.892144 
5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.894043 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.894944 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.896529 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.898512 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.902075 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.903796 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.905036 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.906914 
5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.909382 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.911205 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.913676 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.915872 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.917204 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.918127 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.919910 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.929596 
5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.929649 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.929665 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.929684 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:21 crc kubenswrapper[5114]: I0216 00:10:21.929697 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:21Z","lastTransitionTime":"2026-02-16T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.032941 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.033018 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.033038 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.033069 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.033097 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.136003 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.136113 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.136165 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.136196 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.136215 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.238819 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.238885 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.238903 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.238927 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.238949 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.342028 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.342371 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.342497 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.342576 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.342634 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.446207 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.446311 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.446331 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.446359 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.446377 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.549416 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.549493 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.549506 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.549530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.549545 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.652868 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.653277 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.653459 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.653661 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.653842 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.757008 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.757128 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.757154 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.757186 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.757208 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.816859 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:22 crc kubenswrapper[5114]: E0216 00:10:22.817511 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.859964 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.860358 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.860558 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.860768 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.861143 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.964972 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.965332 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.965493 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.965674 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:22 crc kubenswrapper[5114]: I0216 00:10:22.965830 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:22Z","lastTransitionTime":"2026-02-16T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.068930 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.069019 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.069040 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.069073 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.069100 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.172773 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.173130 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.173317 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.173453 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.173588 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.276585 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.276668 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.276691 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.276722 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.276746 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.367157 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.367230 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.367380 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.367649 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.367689 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.367685 5114 secret.go:189] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.367708 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.367954 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:27.367866751 +0000 UTC m=+103.749143609 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.368041 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:27.368014766 +0000 UTC m=+103.749291614 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.368311 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.368607 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:27.368577052 +0000 UTC m=+103.749853920 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.380822 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.380899 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.380920 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.380951 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.380972 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.469144 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.469581 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.469638 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.469661 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.469749 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.469780 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. 
No retries permitted until 2026-02-16 00:10:27.469751324 +0000 UTC m=+103.851028172 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.469983 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.470155 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs podName:d6149fdd-e85e-41f7-b50a-76f70c153c44 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:27.470121115 +0000 UTC m=+103.851397973 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs") pod "network-metrics-daemon-vk5fl" (UID: "d6149fdd-e85e-41f7-b50a-76f70c153c44") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.484604 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.484685 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.484710 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.484759 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.484784 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.571228 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.571603 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:10:27.571536743 +0000 UTC m=+103.952813601 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.588342 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.588413 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.588432 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.588460 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: 
I0216 00:10:23.588479 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.691328 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.691392 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.691404 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.691424 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.691441 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.794162 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.794735 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.794757 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.794779 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.794794 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.816583 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.816759 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.817034 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.817588 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.817850 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:23 crc kubenswrapper[5114]: E0216 00:10:23.818684 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.898530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.898610 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.898637 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.898673 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:23 crc kubenswrapper[5114]: I0216 00:10:23.898698 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:23Z","lastTransitionTime":"2026-02-16T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.002427 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.002521 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.002550 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.002588 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.002614 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.105198 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.105617 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.105716 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.105807 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.105880 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.208286 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.208634 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.208708 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.208807 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.208881 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.311570 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.312496 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.312584 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.312683 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.312831 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.415664 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.416030 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.416108 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.416172 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.416241 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.518230 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.518363 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.518382 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.518407 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.518426 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.621072 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.621139 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.621159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.621182 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.621201 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.723780 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.723840 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.723864 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.723892 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.723913 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.815933 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:24 crc kubenswrapper[5114]: E0216 00:10:24.816168 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.825798 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.825858 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.825880 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.825906 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.825926 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.928443 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.928519 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.928547 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.928577 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:24 crc kubenswrapper[5114]: I0216 00:10:24.928601 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:24Z","lastTransitionTime":"2026-02-16T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.030721 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.030804 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.030890 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.030919 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.030940 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.133993 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.134083 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.134127 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.134163 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.134208 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.237842 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.237907 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.237925 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.237949 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.237970 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.341422 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.341466 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.341476 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.341490 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.341505 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.443902 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.444324 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.444397 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.444543 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.444618 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.547644 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.548361 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.548431 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.548499 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.548611 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.651313 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.651369 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.651382 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.651406 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.651418 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.753669 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.753723 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.753734 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.753751 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.753764 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.816482 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.816485 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:25 crc kubenswrapper[5114]: E0216 00:10:25.816678 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:25 crc kubenswrapper[5114]: E0216 00:10:25.816845 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.816889 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:25 crc kubenswrapper[5114]: E0216 00:10:25.816953 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.831368 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36e77927-3498-4ebe-bcc5-62b9ddc1ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52f25b1258
c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:09:56Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0216 00:09:56.366393 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 00:09:56.366553 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0216 00:09:56.367494 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1793456589/tls.crt::/tmp/serving-cert-1793456589/tls.key\\\\\\\"\\\\nI0216 00:09:56.738309 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 00:09:56.741479 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 00:09:56.741559 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 00:09:56.741646 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 00:09:56.741696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0216 00:09:56.750507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 00:09:56.750535 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 00:09:56.750564 1 
secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750574 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 00:09:56.750585 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 00:09:56.750589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 00:09:56.750593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 00:09:56.751514 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:09:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.843511 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.853633 5114 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vp5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.856525 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.856584 5114 kubelet_node_status.go:736] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.856600 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.856618 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.856629 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.862710 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vk5fl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6149fdd-e85e-41f7-b50a-76f70c153c44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vk5fl\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.873141 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-5jlj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4627438-b1a6-4cc9-85f6-10e9dd97943b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jlj6\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.882912 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-zp67w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-skmcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zp67w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.901176 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\
\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"
name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9clwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.929319 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"764b478d-1d01-4d84-b45d-6590a38497c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b10c64884bbd71e2157b1670c58209bda6bd063665c1ac3d
058e91ad3a7fc7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://58ea7cf355069731d736ded1f9a033e00b7f747f4a993b9d00516ab40c56d783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://05b2d05490e4cfff0b22711d5a8c00f6728fa0e633a8b993400a629d4424fb55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://33765468880ba21c7b0362a460e75d6e28decbeb2daa74e65202f1e4ac174738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\
"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc05bbf6d8b5e02515a1cbcd8639ce40b8118b0262ad8073c708dfa30ba9a54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"contain
erID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\"
:\\\"2026-02-16T00:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial 
tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.940572 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.951595 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.959318 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.959384 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.959398 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.959422 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.959437 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.962892 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.973831 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-72dpq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctpq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-72dpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.974218 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.974321 5114 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.974342 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.974373 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.974394 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.990545 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1062ad-2431-42c0-950b-f12aded97fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbb2f8f39b9f3bee939bb471570744d580cfdb439c253b8460cacbfda0adfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebf9c3d019e33707c276dab2a0fc3eded08e87049610ece88fb23aebc8fe70fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5dbac4f55a4e2c2f3e9685aef58c61e28ac3f768691715b8218f6a5c80dd6d81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: E0216 00:10:25.990627 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8
108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\
\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.995633 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.995672 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.995685 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.995706 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:25 crc kubenswrapper[5114]: I0216 00:10:25.995720 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:25Z","lastTransitionTime":"2026-02-16T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: E0216 00:10:26.006943 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.010821 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e654f43c-5ba1-48a5-87ae-f6672304d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wlt2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.012163 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.012256 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.012273 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 
crc kubenswrapper[5114]: I0216 00:10:26.012294 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.012310 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: E0216 00:10:26.023997 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has 
sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a391
50f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\
\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\
":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.024285 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-44hnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.028982 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.029027 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.029040 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.029057 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.029074 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.041403 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb4fab3c-e950-4dec-a922-1f9ca4612ef5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://288e2fbc2214d418ac3020d245ad8aaf063f8e63b8fb410077b4f83c7b0e8887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-
16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3bf4f94ba97d4ae528d0ebb96d364672d87f90e197fea356ea55ca938edadcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6f1dde85e03a42b4451963a332e5b67b46f9f2e20df9ff9d84072649ce88c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\
"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: E0216 00:10:26.041903 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb25723
4f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\
":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/o
se-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-
32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.045804 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.045839 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.045852 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.045872 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.045886 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.053508 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bba7bce0-0647-459f-b5c3-17499167a67e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://194a5bb705405e17e124fa501a1108736f68e3acb7d24b8735925b360887f0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":
65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: E0216 00:10:26.056669 5114 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08
951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4e
fb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141
094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e8
3fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"97e4fb25-1ecb-4aec-afc8-32d47170a2de\\\",\\\"systemUUID\\\":\\\"22e33d55-d1b2-40e6-8445-92fd0fd602a7\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: E0216 00:10:26.056875 5114 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.061333 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.061381 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.061394 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.061415 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.061428 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.068711 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.082610 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.163938 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc 
kubenswrapper[5114]: I0216 00:10:26.163985 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.163997 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.164015 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.164025 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.268935 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.269404 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.269603 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.269782 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.270369 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.373932 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.374562 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.374670 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.374765 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.374859 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.477888 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.478410 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.478568 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.478904 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.479102 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.582950 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.583066 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.583086 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.583115 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.583148 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.686507 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.686594 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.686614 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.686644 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.686664 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.788718 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.788772 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.788785 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.788805 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.788826 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.816633 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:26 crc kubenswrapper[5114]: E0216 00:10:26.816838 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.892116 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.892181 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.892193 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.892213 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.892228 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.994942 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.995002 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.995017 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.995042 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:26 crc kubenswrapper[5114]: I0216 00:10:26.995058 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:26Z","lastTransitionTime":"2026-02-16T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.097219 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.097295 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.097314 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.097335 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.097354 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.200575 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.200632 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.200655 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.200681 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.200701 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.302156 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.302216 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.302229 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.302286 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.302302 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.405027 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.405129 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.405153 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.405183 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.405206 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.418599 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.418642 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.418729 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.418768 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.418864 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.418879 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:35.418846157 +0000 UTC m=+111.800123015 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.418882 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.418904 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.418943 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:35.4189319 +0000 UTC m=+111.800208728 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.419038 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.419204 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:35.419168647 +0000 UTC m=+111.800445505 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.508293 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.508362 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.508376 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.508396 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.508412 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.519908 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.519961 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.520360 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.520357 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.520393 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.520625 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs podName:d6149fdd-e85e-41f7-b50a-76f70c153c44 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:35.520579325 +0000 UTC m=+111.901856293 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs") pod "network-metrics-daemon-vk5fl" (UID: "d6149fdd-e85e-41f7-b50a-76f70c153c44") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.520634 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.520764 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:35.52074319 +0000 UTC m=+111.902020188 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.611544 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.611617 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.611635 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.611663 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.611683 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.621178 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.621446 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:10:35.621418476 +0000 UTC m=+112.002695304 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.715001 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.715153 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.715177 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.715214 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: 
I0216 00:10:27.715237 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.816018 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.816229 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.816318 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.816227 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.816518 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:27 crc kubenswrapper[5114]: E0216 00:10:27.816718 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.824305 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.824413 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.824434 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.824469 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.824497 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.928040 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.928140 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.928162 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.928601 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:27 crc kubenswrapper[5114]: I0216 00:10:27.928945 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:27Z","lastTransitionTime":"2026-02-16T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.031701 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.031781 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.031800 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.031864 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.031883 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.134704 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.134783 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.134801 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.134823 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.134836 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.237565 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.237749 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.237767 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.237795 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.237813 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.341172 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.341276 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.341302 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.341332 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.341352 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.444242 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.444343 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.444362 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.444385 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.444409 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.547954 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.548048 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.548077 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.548114 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.548141 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.650829 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.650916 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.650941 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.650974 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.651000 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.753964 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.754036 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.754057 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.754083 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.754104 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.816526 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 16 00:10:28 crc kubenswrapper[5114]: E0216 00:10:28.816774 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.857007 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.857202 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.857225 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.857322 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.857348 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.960527 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.960606 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.960624 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.960653 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:28 crc kubenswrapper[5114]: I0216 00:10:28.960673 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:28Z","lastTransitionTime":"2026-02-16T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.063876 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.063958 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.063984 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.064014 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.064035 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.167411 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.167482 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.167499 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.167530 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.167557 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.270660 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.270744 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.270771 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.270795 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.270814 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.373932 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.374051 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.374113 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.374158 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.374182 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.476770 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.476833 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.476850 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.476872 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.476892 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.579938 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.580012 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.580035 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.580068 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.580089 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.683181 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.683292 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.683312 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.683337 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.683360 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.786703 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.786830 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.786851 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.786931 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.786955 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.816594 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.816620 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 16 00:10:29 crc kubenswrapper[5114]: E0216 00:10:29.816781 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.816799 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:29 crc kubenswrapper[5114]: E0216 00:10:29.817005 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44"
Feb 16 00:10:29 crc kubenswrapper[5114]: E0216 00:10:29.817681 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.893106 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.893212 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.893234 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.893287 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.893306 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.996341 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.996423 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.996461 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.996494 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:29 crc kubenswrapper[5114]: I0216 00:10:29.996517 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:29Z","lastTransitionTime":"2026-02-16T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.102711 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.102808 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.102833 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.102865 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.102891 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.206414 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.206494 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.206510 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.206529 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.206542 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.310898 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.310957 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.310969 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.310991 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.311004 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.412892 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.412977 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.413003 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.413040 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.413065 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.516113 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.516182 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.516201 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.516233 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.516283 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.619313 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.619407 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.619428 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.619460 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.619482 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.722283 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.722411 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.722447 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.722484 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.722509 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.816540 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 16 00:10:30 crc kubenswrapper[5114]: E0216 00:10:30.817137 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.818343 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6"
Feb 16 00:10:30 crc kubenswrapper[5114]: E0216 00:10:30.818779 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.824802 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.824858 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.824881 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.824905 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.824926 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.928116 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.928219 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.928276 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.928314 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:30 crc kubenswrapper[5114]: I0216 00:10:30.928337 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:30Z","lastTransitionTime":"2026-02-16T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.030974 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.031036 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.031055 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.031080 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.031098 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.136461 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.136520 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.136534 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.136560 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.136577 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.240219 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.240297 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.240311 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.240334 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.240353 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.333882 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jlj6" event={"ID":"c4627438-b1a6-4cc9-85f6-10e9dd97943b","Type":"ContainerStarted","Data":"c83dc83d3735a8f6a2016857bcda28e79e5e7c3dc6e7dc96fdff987a03f69e42"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.340749 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"3943eb03152b2c4fc4e6371cf736ad9cef0161954a4fb2ae452834676cb01187"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.340830 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"cd7df8c0b36ef718c5f92c4eee398a362683f61f9ad5c6e433f716131ecc7c2e"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.342607 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.342657 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.342673 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.342695 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.342710 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.358772 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxrth\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-9clwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.381193 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"764b478d-1d01-4d84-b45d-6590a38497c1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://b10c64884bbd71e2157b1670c58209bda6bd063665c1ac3d058e91ad3a7fc7de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://58ea7cf355069731d736ded1f9a033e00b7f747f4a993b9d00516ab40c56d783\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://05b2d05490e4cfff0b22711d5a8c00f6728fa0e633a8b993400a629d4424f
b55\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://33765468880ba21c7b0362a460e75d6e28decbeb2daa74e65202f1e4ac174738\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:51Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp
-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://cc05bbf6d8b5e02515a1cbcd8639ce40b8118b0262ad8073c708dfa30ba9a54d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:50Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\
\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://08b9ef6bebe0725db2e07ce676e32d1cc368ee337e7f0e4212ba78a5d4be836c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ea372fb2594d3b0941b4a745613161391e83e38a5e6aa02d2661f39ceb8ddbb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a
61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://219c97a30ace8cf7c014e206c0a6bd68aa31ee22bfc0361c4364a7bfa3a22493\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.398797 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.413443 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.433910 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.451630 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.451698 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.451717 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.451746 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.451694 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-72dpq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a17caad8-b1e3-46bb-a3fe-843bba1b8f97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ctpq7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-72dpq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.451765 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.473504 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1062ad-2431-42c0-950b-f12aded97fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbb2f8f39b9f3bee939bb471570744d580cfdb439c253b8460cacbfda0adfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\
":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebf9c3d019e33707c276dab2a0fc3eded08e87049610ece88fb23aebc8fe70fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde72610
9a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5dbac4f55a4e2c2f3e9685aef58c61e28ac3f768691715b8218f6a5c80dd6d81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.488068 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e654f43c-5ba1-48a5-87ae-f6672304d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wlt2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.499651 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-44hnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.514767 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb4fab3c-e950-4dec-a922-1f9ca4612ef5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://288e2fbc2214d418ac3020d245ad8aaf063f8e63b8fb410077b4f83c7b0e8887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3bf4f94ba97d4ae528d0ebb96d364672d87f90e197fea356ea55ca938edadcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6f1dde85e03a42b4451963a332e5b67b46f9f2e20df9ff9d84072649ce88c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 
00:10:31.522788 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bba7bce0-0647-459f-b5c3-17499167a67e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://194a5bb705405e17e124fa501a1108736f68e3acb7d24b8735925b360887f0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\
":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc 
kubenswrapper[5114]: I0216 00:10:31.535538 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.549515 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.554636 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:31 crc 
kubenswrapper[5114]: I0216 00:10:31.554697 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.554710 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.554732 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.554748 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.567309 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36e77927-3498-4ebe-bcc5-62b9ddc1ae34\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-16T00:09:56Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW0216 00:09:56.366393 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0216 00:09:56.366553 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0216 00:09:56.367494 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1793456589/tls.crt::/tmp/serving-cert-1793456589/tls.key\\\\\\\"\\\\nI0216 00:09:56.738309 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0216 00:09:56.741479 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0216 00:09:56.741559 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0216 00:09:56.741646 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0216 00:09:56.741696 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" 
limit=200\\\\nI0216 00:09:56.750507 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0216 00:09:56.750535 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0216 00:09:56.750564 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750574 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0216 00:09:56.750580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0216 00:09:56.750585 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0216 00:09:56.750589 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0216 00:09:56.750593 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0216 00:09:56.751514 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-16T00:09:56Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:49Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.582523 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.593070 5114 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-42vvz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-vp5kn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.604114 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-vk5fl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d6149fdd-e85e-41f7-b50a-76f70c153c44\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-thrjj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-vk5fl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.616441 5114 status_manager.go:919] "Failed to update status for pod" 
pod="openshift-multus/multus-5jlj6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c4627438-b1a6-4cc9-85f6-10e9dd97943b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"},\\\"containerID\\\":\\\"cri-o://c83dc83d3735a8f6a2016857bcda28e79e5e7c3dc6e7dc96fdff987a03f69e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"65Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:10:30Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/ho
st/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pq4ff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-5jlj6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.625621 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-zp67w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cbb290fa-349e-4aa8-b21a-00ef48fba6e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-skmcq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-zp67w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.641139 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd1062ad-2431-42c0-950b-f12aded97fdf\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://cbb2f8f39b9f3bee939bb471570744d580cfdb439c253b8460cacbfda0adfbf4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://ebf9c3d019e33707c276dab2a0fc3eded08e87049610ece88fb23aebc8fe70fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5dbac4f55a4e2c2f3e9685aef58c61e28ac3f768691715b8218f6a5c80dd6d81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\
":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.658098 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.658176 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.658193 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.658216 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.658229 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.662533 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e654f43c-5ba1-48a5-87ae-f6672304d245\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2glh5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-wlt2s\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.673214 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:10:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phgcx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:10:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-44hnf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.681986 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb4fab3c-e950-4dec-a922-1f9ca4612ef5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:09:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://288e2fbc2214d418ac3020d245ad8aaf063f8e63b8fb410077b4f83c7b0e8887\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://3bf4f94ba97d4ae528d0ebb96d364672d87f90e197fea356ea55ca938edadcd0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a6f1dde85e03a42b4451963a332e5b67b46f9f2e20df9ff9d84072649ce88c2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplement
alGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cf173ac09d6e28fed57607d3c4548aef1f1d233a7b185920fb74f62ad43766b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 
00:10:31.690301 5114 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bba7bce0-0647-459f-b5c3-17499167a67e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-16T00:08:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://194a5bb705405e17e124fa501a1108736f68e3acb7d24b8735925b360887f0a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-16T00:08:48Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\
":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c5f72d99acdd4f2140971a5ed9793c1b04b67047852255b8ce1e2e6519d1c25\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-16T00:08:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-16T00:08:47Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-16T00:08:45Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 16 00:10:31 crc 
kubenswrapper[5114]: I0216 00:10:31.760901 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.760970 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.760981 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.761002 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.761017 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.811389 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-5jlj6" podStartSLOduration=84.81136959 podStartE2EDuration="1m24.81136959s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:31.811269567 +0000 UTC m=+108.192546405" watchObservedRunningTime="2026-02-16 00:10:31.81136959 +0000 UTC m=+108.192646408"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.815839 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.815840 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.815876 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 16 00:10:31 crc kubenswrapper[5114]: E0216 00:10:31.816111 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44"
Feb 16 00:10:31 crc kubenswrapper[5114]: E0216 00:10:31.816805 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 16 00:10:31 crc kubenswrapper[5114]: E0216 00:10:31.816923 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.866875 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.868116 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.868133 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.868150 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.868163 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.902582 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=12.902563152 podStartE2EDuration="12.902563152s" podCreationTimestamp="2026-02-16 00:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:31.900990437 +0000 UTC m=+108.282267265" watchObservedRunningTime="2026-02-16 00:10:31.902563152 +0000 UTC m=+108.283839980"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.971071 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.971133 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.971145 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.971162 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:31 crc kubenswrapper[5114]: I0216 00:10:31.971173 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:31Z","lastTransitionTime":"2026-02-16T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.074055 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.074112 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.074128 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.074150 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.074162 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.176872 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.176925 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.176935 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.176955 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.176967 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.279920 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.279981 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.279999 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.280025 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.280044 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.349695 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" event={"ID":"1a832ec7-da6a-4e0b-8b74-47f2038c0c13","Type":"ContainerStarted","Data":"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.349771 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" event={"ID":"1a832ec7-da6a-4e0b-8b74-47f2038c0c13","Type":"ContainerStarted","Data":"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.382293 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.382412 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.382433 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.382458 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.382475 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.415717 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=13.415676721 podStartE2EDuration="13.415676721s" podCreationTimestamp="2026-02-16 00:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:32.385468316 +0000 UTC m=+108.766745204" watchObservedRunningTime="2026-02-16 00:10:32.415676721 +0000 UTC m=+108.796953569"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.432576 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" podStartSLOduration=84.43255418 podStartE2EDuration="1m24.43255418s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:32.431790518 +0000 UTC m=+108.813067336" watchObservedRunningTime="2026-02-16 00:10:32.43255418 +0000 UTC m=+108.813831038"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.474993 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=13.474963329 podStartE2EDuration="13.474963329s" podCreationTimestamp="2026-02-16 00:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:32.461127728 +0000 UTC m=+108.842404596" watchObservedRunningTime="2026-02-16 00:10:32.474963329 +0000 UTC m=+108.856240187"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.485105 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.485184 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.485209 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.485274 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.485294 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.588334 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.588404 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.588424 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.588449 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.588469 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.691490 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.691558 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.691577 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.691607 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.691626 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.793811 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.793863 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.793876 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.793893 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.793905 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.816817 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 16 00:10:32 crc kubenswrapper[5114]: E0216 00:10:32.816996 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.896350 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.896420 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.896457 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.896488 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.896505 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.999411 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.999476 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.999486 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:32 crc kubenswrapper[5114]: I0216 00:10:32.999506 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:32.999521 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:32Z","lastTransitionTime":"2026-02-16T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.102072 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.102147 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.102172 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.102205 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.102228 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.204525 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.204592 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.204614 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.204640 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.204660 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.308173 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.308343 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.308377 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.308415 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.308442 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.411879 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.412238 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.412417 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.412591 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.412730 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.515864 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.516212 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.516413 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.516561 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.516705 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.619494 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.619848 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.619978 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.620150 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.620316 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.723153 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.723215 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.723234 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.723286 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.723306 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.817035 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 16 00:10:33 crc kubenswrapper[5114]: E0216 00:10:33.817228 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.817706 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 16 00:10:33 crc kubenswrapper[5114]: E0216 00:10:33.817851 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.817950 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:33 crc kubenswrapper[5114]: E0216 00:10:33.818297 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.825521 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.825827 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.826046 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.826221 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.826474 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.931848 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.931917 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.931940 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.931968 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:33 crc kubenswrapper[5114]: I0216 00:10:33.931989 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:33Z","lastTransitionTime":"2026-02-16T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.035006 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.035064 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.035077 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.035098 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.035115 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.138107 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.138165 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.138179 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.138200 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.138213 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.242027 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.242104 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.242127 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.242153 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.242173 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.345347 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.345425 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.345444 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.345471 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.345492 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.359731 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"e4d29b737bd93fabe5a0cb0f0bddaa8ccd4e177fa6669463af19cde06bbcc8dc"} Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.388780 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=15.388750765 podStartE2EDuration="15.388750765s" podCreationTimestamp="2026-02-16 00:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:32.47429288 +0000 UTC m=+108.855569728" watchObservedRunningTime="2026-02-16 00:10:34.388750765 +0000 UTC m=+110.770027613" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.448823 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.449494 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.449519 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.449555 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.449578 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.552176 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.552280 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.552304 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.552332 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.552350 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.655662 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.655742 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.655762 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.655793 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.655811 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.759231 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.759709 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.759865 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.760701 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.760818 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.816641 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:34 crc kubenswrapper[5114]: E0216 00:10:34.818437 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.873925 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.873988 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.874005 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.874031 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.874048 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.976837 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.976896 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.976909 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.976932 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:34 crc kubenswrapper[5114]: I0216 00:10:34.976946 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:34Z","lastTransitionTime":"2026-02-16T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.079468 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.079520 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.079533 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.079552 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.079568 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.181837 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.181893 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.181905 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.181923 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.181937 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.286013 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.286400 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.286523 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.286620 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.286698 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.372720 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-72dpq" event={"ID":"a17caad8-b1e3-46bb-a3fe-843bba1b8f97","Type":"ContainerStarted","Data":"db1fc8215a00ec79abe67a5d338c1e6e37d7e08c60c720cae8c0b1329733e724"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.374909 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063" exitCode=0 Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.375059 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.377821 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"37770605b7f719b89f036526f8aea8559aa83e1d78c9396380993e3bb02b7994"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.377878 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"e129ae4ee7d3742ba2d538ce3a74a1fc75d899264cde2462cc24760ecb7481d2"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.379692 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerStarted","Data":"17ff9ab1f247542aba34d39d218759c5900e84ca5ba4f64ea844f7838a703d24"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.388963 5114 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.389026 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.389040 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.389063 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.389080 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.395465 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-72dpq" podStartSLOduration=88.395443446 podStartE2EDuration="1m28.395443446s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:35.39454491 +0000 UTC m=+111.775821728" watchObservedRunningTime="2026-02-16 00:10:35.395443446 +0000 UTC m=+111.776720264" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.416167 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podStartSLOduration=88.416066743 podStartE2EDuration="1m28.416066743s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:35.412437228 +0000 UTC m=+111.793714086" watchObservedRunningTime="2026-02-16 00:10:35.416066743 +0000 UTC m=+111.797343601" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.431957 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.432025 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.432130 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.432235 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:51.432209071 +0000 UTC m=+127.813485929 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.432128 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.432378 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.432415 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:35 crc 
kubenswrapper[5114]: E0216 00:10:35.432439 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.432519 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:51.432495209 +0000 UTC m=+127.813772187 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.435138 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.435309 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:51.435279 +0000 UTC m=+127.816555858 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.491096 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.491159 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.491172 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.491190 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.491203 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.535263 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.535449 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs podName:d6149fdd-e85e-41f7-b50a-76f70c153c44 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:51.535423412 +0000 UTC m=+127.916700230 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs") pod "network-metrics-daemon-vk5fl" (UID: "d6149fdd-e85e-41f7-b50a-76f70c153c44") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.535567 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.536151 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.536479 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: 
E0216 00:10:35.536506 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.536521 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.536562 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-16 00:10:51.536551764 +0000 UTC m=+127.917828582 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.593953 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.594023 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.594043 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.594069 5114 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeNotReady" Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.594085 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.638104 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.638450 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:10:51.638425797 +0000 UTC m=+128.019702615 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.697124 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.697192 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.697209 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.697243 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.697303 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.800464 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.800539 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.800623 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.800655 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.800676 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.818771 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.818974 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.819239 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.820393 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.822914 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:35 crc kubenswrapper[5114]: E0216 00:10:35.823094 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.903665 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.903756 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.903774 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.903802 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:35 crc kubenswrapper[5114]: I0216 00:10:35.903821 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:35Z","lastTransitionTime":"2026-02-16T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.009391 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.009468 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.009481 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.009503 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.009515 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:36Z","lastTransitionTime":"2026-02-16T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.112331 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.112393 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.112408 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.112434 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.112455 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:36Z","lastTransitionTime":"2026-02-16T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.214780 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.214847 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.214860 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.214881 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.214894 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:36Z","lastTransitionTime":"2026-02-16T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.318099 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.318175 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.318196 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.318223 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.318268 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:36Z","lastTransitionTime":"2026-02-16T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.385023 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"82aa745f0a7e95befec995791a3b18b60459430cfd9485c9f5322e4dfc66a994"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.386839 5114 generic.go:358] "Generic (PLEG): container finished" podID="e654f43c-5ba1-48a5-87ae-f6672304d245" containerID="17ff9ab1f247542aba34d39d218759c5900e84ca5ba4f64ea844f7838a703d24" exitCode=0
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.386906 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerDied","Data":"17ff9ab1f247542aba34d39d218759c5900e84ca5ba4f64ea844f7838a703d24"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.388869 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-zp67w" event={"ID":"cbb290fa-349e-4aa8-b21a-00ef48fba6e7","Type":"ContainerStarted","Data":"27f66a63f571842c82ff0bf7d815fc086e01c524f53e0c9a4e97ae3ec70107ce"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.418443 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.418488 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.418498 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.418514 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.418525 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:36Z","lastTransitionTime":"2026-02-16T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.418913 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.419014 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.419042 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.419103 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.419131 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.444096 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.444171 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.444192 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.444217 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.444229 5114 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-16T00:10:36Z","lastTransitionTime":"2026-02-16T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.477146 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-zp67w" podStartSLOduration=90.47712288 podStartE2EDuration="1m30.47712288s" podCreationTimestamp="2026-02-16 00:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:36.43538858 +0000 UTC m=+112.816665438" watchObservedRunningTime="2026-02-16 00:10:36.47712288 +0000 UTC m=+112.858399698"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.478240 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"]
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.598134 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.601003 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.602743 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.602896 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.602981 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.763808 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ad970bd-13be-428b-a243-4c04468a30b7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.763848 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad970bd-13be-428b-a243-4c04468a30b7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.763917 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7ad970bd-13be-428b-a243-4c04468a30b7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.763947 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7ad970bd-13be-428b-a243-4c04468a30b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.763967 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ad970bd-13be-428b-a243-4c04468a30b7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.784849 5114 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.794650 5114 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.816169 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 16 00:10:36 crc kubenswrapper[5114]: E0216 00:10:36.816342 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.864722 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ad970bd-13be-428b-a243-4c04468a30b7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.864852 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad970bd-13be-428b-a243-4c04468a30b7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.864892 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7ad970bd-13be-428b-a243-4c04468a30b7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.864917 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7ad970bd-13be-428b-a243-4c04468a30b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.864967 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/7ad970bd-13be-428b-a243-4c04468a30b7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.865075 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ad970bd-13be-428b-a243-4c04468a30b7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.865308 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/7ad970bd-13be-428b-a243-4c04468a30b7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.866387 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7ad970bd-13be-428b-a243-4c04468a30b7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.884573 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ad970bd-13be-428b-a243-4c04468a30b7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.900126 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7ad970bd-13be-428b-a243-4c04468a30b7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-bszc6\" (UID: \"7ad970bd-13be-428b-a243-4c04468a30b7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: I0216 00:10:36.912596 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6"
Feb 16 00:10:36 crc kubenswrapper[5114]: W0216 00:10:36.939238 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ad970bd_13be_428b_a243_4c04468a30b7.slice/crio-00c68e5cb96a8815559174247bbffe93f5aef015ff9f9ffb002094181e814aad WatchSource:0}: Error finding container 00c68e5cb96a8815559174247bbffe93f5aef015ff9f9ffb002094181e814aad: Status 404 returned error can't find the container with id 00c68e5cb96a8815559174247bbffe93f5aef015ff9f9ffb002094181e814aad
Feb 16 00:10:37 crc kubenswrapper[5114]: I0216 00:10:37.427646 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6" event={"ID":"7ad970bd-13be-428b-a243-4c04468a30b7","Type":"ContainerStarted","Data":"00c68e5cb96a8815559174247bbffe93f5aef015ff9f9ffb002094181e814aad"}
Feb 16 00:10:37 crc kubenswrapper[5114]: I0216 00:10:37.439140 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"}
Feb 16 00:10:37 crc kubenswrapper[5114]: I0216 00:10:37.816063 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 16 00:10:37 crc kubenswrapper[5114]: I0216 00:10:37.816131 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 16 00:10:37 crc kubenswrapper[5114]: E0216 00:10:37.816399 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 16 00:10:37 crc kubenswrapper[5114]: E0216 00:10:37.816617 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 16 00:10:37 crc kubenswrapper[5114]: I0216 00:10:37.816727 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:37 crc kubenswrapper[5114]: E0216 00:10:37.816922 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44"
Feb 16 00:10:38 crc kubenswrapper[5114]: I0216 00:10:38.445525 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6" event={"ID":"7ad970bd-13be-428b-a243-4c04468a30b7","Type":"ContainerStarted","Data":"bbe34a5d19d6f5a76d6af72cd48287d9fe5d5197feeb27795a850b6cf9e64003"}
Feb 16 00:10:38 crc kubenswrapper[5114]: I0216 00:10:38.450321 5114 generic.go:358] "Generic (PLEG): container finished" podID="e654f43c-5ba1-48a5-87ae-f6672304d245" containerID="6e76419cd4809177d6a36b80384423a2837cb15d2e79ff9916dfb3b58ceed6b2" exitCode=0
Feb 16 00:10:38 crc kubenswrapper[5114]: I0216 00:10:38.450474 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerDied","Data":"6e76419cd4809177d6a36b80384423a2837cb15d2e79ff9916dfb3b58ceed6b2"}
Feb 16 00:10:38 crc kubenswrapper[5114]: I0216 00:10:38.478605 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-bszc6" podStartSLOduration=91.478574495 podStartE2EDuration="1m31.478574495s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:38.474374494 +0000 UTC m=+114.855651352" watchObservedRunningTime="2026-02-16 00:10:38.478574495 +0000 UTC m=+114.859851313"
Feb 16 00:10:38 crc kubenswrapper[5114]: I0216 00:10:38.815872 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 16 00:10:38 crc kubenswrapper[5114]: E0216 00:10:38.816086 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 16 00:10:39 crc kubenswrapper[5114]: I0216 00:10:39.458704 5114 generic.go:358] "Generic (PLEG): container finished" podID="e654f43c-5ba1-48a5-87ae-f6672304d245" containerID="3438307e3692a106d6b23c769a8d34106dccd8c812e9615ab40e315f16943706" exitCode=0
Feb 16 00:10:39 crc kubenswrapper[5114]: I0216 00:10:39.458830 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerDied","Data":"3438307e3692a106d6b23c769a8d34106dccd8c812e9615ab40e315f16943706"}
Feb 16 00:10:39 crc kubenswrapper[5114]: I0216 00:10:39.465324 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"}
Feb 16 00:10:39 crc kubenswrapper[5114]: I0216 00:10:39.815892 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 16 00:10:39 crc kubenswrapper[5114]: E0216 00:10:39.816083 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 16 00:10:39 crc kubenswrapper[5114]: I0216 00:10:39.816095 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:39 crc kubenswrapper[5114]: E0216 00:10:39.816242 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44"
Feb 16 00:10:39 crc kubenswrapper[5114]: I0216 00:10:39.816161 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 16 00:10:39 crc kubenswrapper[5114]: E0216 00:10:39.816368 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 16 00:10:40 crc kubenswrapper[5114]: I0216 00:10:40.473081 5114 generic.go:358] "Generic (PLEG): container finished" podID="e654f43c-5ba1-48a5-87ae-f6672304d245" containerID="1780ff5629e045ab5b33938b8104c2b29037c2f53b7a5d42b87d505f2e368014" exitCode=0
Feb 16 00:10:40 crc kubenswrapper[5114]: I0216 00:10:40.473177 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerDied","Data":"1780ff5629e045ab5b33938b8104c2b29037c2f53b7a5d42b87d505f2e368014"}
Feb 16 00:10:40 crc kubenswrapper[5114]: I0216 00:10:40.816889 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 16 00:10:40 crc kubenswrapper[5114]: E0216 00:10:40.817096 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.480914 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerStarted","Data":"e8d02a08d651cbe14ea928308ac62844762b3561979ae029ac5c203ef8e45a18"}
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.486159 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerStarted","Data":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"}
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.486608 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.486640 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.486654 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.536426 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.537024 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.556360 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podStartSLOduration=94.556342151 podStartE2EDuration="1m34.556342151s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:41.553322784 +0000 UTC m=+117.934599642" watchObservedRunningTime="2026-02-16 00:10:41.556342151 +0000 UTC m=+117.937618979"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.815998 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.816038 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.816046 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Feb 16 00:10:41 crc kubenswrapper[5114]: E0216 00:10:41.816214 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Feb 16 00:10:41 crc kubenswrapper[5114]: E0216 00:10:41.816343 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44"
Feb 16 00:10:41 crc kubenswrapper[5114]: E0216 00:10:41.816845 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Feb 16 00:10:41 crc kubenswrapper[5114]: I0216 00:10:41.817049 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6"
Feb 16 00:10:42 crc kubenswrapper[5114]: I0216 00:10:42.816569 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Feb 16 00:10:42 crc kubenswrapper[5114]: E0216 00:10:42.817170 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:43 crc kubenswrapper[5114]: I0216 00:10:43.508528 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 16 00:10:43 crc kubenswrapper[5114]: I0216 00:10:43.510655 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788"} Feb 16 00:10:43 crc kubenswrapper[5114]: I0216 00:10:43.512115 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:43 crc kubenswrapper[5114]: I0216 00:10:43.543744 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=24.543721979 podStartE2EDuration="24.543721979s" podCreationTimestamp="2026-02-16 00:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:43.543320448 +0000 UTC m=+119.924597346" watchObservedRunningTime="2026-02-16 00:10:43.543721979 +0000 UTC m=+119.924998807" Feb 16 00:10:43 crc kubenswrapper[5114]: I0216 00:10:43.816876 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:43 crc kubenswrapper[5114]: I0216 00:10:43.816994 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:43 crc kubenswrapper[5114]: I0216 00:10:43.817030 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:43 crc kubenswrapper[5114]: E0216 00:10:43.817113 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:43 crc kubenswrapper[5114]: E0216 00:10:43.817344 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:43 crc kubenswrapper[5114]: E0216 00:10:43.817508 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:44 crc kubenswrapper[5114]: I0216 00:10:44.517831 5114 generic.go:358] "Generic (PLEG): container finished" podID="e654f43c-5ba1-48a5-87ae-f6672304d245" containerID="e8d02a08d651cbe14ea928308ac62844762b3561979ae029ac5c203ef8e45a18" exitCode=0 Feb 16 00:10:44 crc kubenswrapper[5114]: I0216 00:10:44.517952 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerDied","Data":"e8d02a08d651cbe14ea928308ac62844762b3561979ae029ac5c203ef8e45a18"} Feb 16 00:10:44 crc kubenswrapper[5114]: I0216 00:10:44.816534 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:44 crc kubenswrapper[5114]: E0216 00:10:44.817001 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:45 crc kubenswrapper[5114]: I0216 00:10:45.077347 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vk5fl"] Feb 16 00:10:45 crc kubenswrapper[5114]: I0216 00:10:45.077482 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:45 crc kubenswrapper[5114]: E0216 00:10:45.077607 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:45 crc kubenswrapper[5114]: I0216 00:10:45.526104 5114 generic.go:358] "Generic (PLEG): container finished" podID="e654f43c-5ba1-48a5-87ae-f6672304d245" containerID="7ca500d7f59a5cc0d97b4e11cbe129f0befc7cf490776e7da4b148e660be4461" exitCode=0 Feb 16 00:10:45 crc kubenswrapper[5114]: I0216 00:10:45.526208 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerDied","Data":"7ca500d7f59a5cc0d97b4e11cbe129f0befc7cf490776e7da4b148e660be4461"} Feb 16 00:10:45 crc kubenswrapper[5114]: E0216 00:10:45.786332 5114 kubelet_node_status.go:509] "Node not becoming ready in time after startup" Feb 16 00:10:45 crc kubenswrapper[5114]: I0216 00:10:45.816914 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:45 crc kubenswrapper[5114]: I0216 00:10:45.817087 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:45 crc kubenswrapper[5114]: E0216 00:10:45.817459 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:45 crc kubenswrapper[5114]: E0216 00:10:45.817637 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:45 crc kubenswrapper[5114]: E0216 00:10:45.889299 5114 kubelet.go:3132] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 16 00:10:46 crc kubenswrapper[5114]: I0216 00:10:46.542690 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" event={"ID":"e654f43c-5ba1-48a5-87ae-f6672304d245","Type":"ContainerStarted","Data":"efbc58d1ffe9918750aa24464ea6bedb64cbcc15a3cf6c630fdbdf503514d5ae"} Feb 16 00:10:46 crc kubenswrapper[5114]: I0216 00:10:46.815975 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:46 crc kubenswrapper[5114]: I0216 00:10:46.816074 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:46 crc kubenswrapper[5114]: E0216 00:10:46.816304 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:46 crc kubenswrapper[5114]: E0216 00:10:46.816538 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:47 crc kubenswrapper[5114]: I0216 00:10:47.816275 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:47 crc kubenswrapper[5114]: E0216 00:10:47.816467 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:47 crc kubenswrapper[5114]: I0216 00:10:47.816581 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:47 crc kubenswrapper[5114]: E0216 00:10:47.816850 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:48 crc kubenswrapper[5114]: I0216 00:10:48.816881 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:48 crc kubenswrapper[5114]: I0216 00:10:48.816949 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:48 crc kubenswrapper[5114]: E0216 00:10:48.817127 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:48 crc kubenswrapper[5114]: E0216 00:10:48.817342 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:49 crc kubenswrapper[5114]: I0216 00:10:49.822735 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:49 crc kubenswrapper[5114]: I0216 00:10:49.822756 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:49 crc kubenswrapper[5114]: E0216 00:10:49.823219 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Feb 16 00:10:49 crc kubenswrapper[5114]: E0216 00:10:49.823350 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Feb 16 00:10:50 crc kubenswrapper[5114]: I0216 00:10:50.816679 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:50 crc kubenswrapper[5114]: I0216 00:10:50.816680 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:50 crc kubenswrapper[5114]: E0216 00:10:50.816862 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Feb 16 00:10:50 crc kubenswrapper[5114]: E0216 00:10:50.817025 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-vk5fl" podUID="d6149fdd-e85e-41f7-b50a-76f70c153c44" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.481510 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.481655 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.481700 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.481811 5114 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.481889 5114 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.481938 5114 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:11:23.481910205 +0000 UTC m=+159.863187063 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.481813 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.481983 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.481992 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-02-16 00:11:23.481969267 +0000 UTC m=+159.863246125 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.482004 5114 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.482081 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-02-16 00:11:23.48206179 +0000 UTC m=+159.863338648 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.583359 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.583480 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.583635 5114 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.583780 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs podName:d6149fdd-e85e-41f7-b50a-76f70c153c44 nodeName:}" failed. No retries permitted until 2026-02-16 00:11:23.583741776 +0000 UTC m=+159.965018714 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs") pod "network-metrics-daemon-vk5fl" (UID: "d6149fdd-e85e-41f7-b50a-76f70c153c44") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.583797 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.583825 5114 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.583842 5114 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.583928 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-02-16 00:11:23.583901741 +0000 UTC m=+159.965178569 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.685131 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:51 crc kubenswrapper[5114]: E0216 00:10:51.685397 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:23.685365691 +0000 UTC m=+160.066642529 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.823699 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.823758 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.826989 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.830553 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.830707 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 16 00:10:51 crc kubenswrapper[5114]: I0216 00:10:51.830762 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 16 00:10:52 crc kubenswrapper[5114]: I0216 00:10:52.816721 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:10:52 crc kubenswrapper[5114]: I0216 00:10:52.816731 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:10:52 crc kubenswrapper[5114]: I0216 00:10:52.820882 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Feb 16 00:10:52 crc kubenswrapper[5114]: I0216 00:10:52.831052 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 16 00:10:54 crc kubenswrapper[5114]: I0216 00:10:54.526574 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:10:54 crc kubenswrapper[5114]: I0216 00:10:54.564347 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wlt2s" podStartSLOduration=107.564322854 podStartE2EDuration="1m47.564322854s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:10:46.583578405 +0000 UTC m=+122.964855233" watchObservedRunningTime="2026-02-16 00:10:54.564322854 +0000 UTC m=+130.945599682" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.839792 5114 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.885098 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.894115 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-skdc2"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.894387 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.898979 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.900124 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.900746 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.900876 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.901313 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.901082 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-nhfsj"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.903174 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.904861 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.905670 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.911205 5114 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.911798 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.928945 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.930734 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.931409 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.932069 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.932146 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.932494 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.933940 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.934015 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.934153 5114 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-console/downloads-747b44746d-x9wkk"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.934442 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.934471 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.935200 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.935910 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936261 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936373 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936460 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936561 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936672 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936757 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-x9wkk" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936774 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.936672 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.937311 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.940232 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2jwtw"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.941472 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.942296 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.942485 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-d8d6z"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.942774 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.942718 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.942992 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.943064 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.943175 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.943408 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.943638 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.944737 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-l8qvm"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.945579 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.947590 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.947607 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.947888 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948019 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948066 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948191 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948371 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948602 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948692 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948755 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.948898 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.949039 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.949091 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.949143 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.949222 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.949902 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.950810 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.952931 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.953095 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.953393 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.953475 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.953580 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5n27w"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.953776 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.953883 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955145 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-image-import-ca\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955189 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955218 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-machine-approver-tls\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955257 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ed4f0e-0187-43d7-a456-eb14ee69d614-tmp\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955280 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955305 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-config\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955327 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955364 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955387 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bf7f\" (UniqueName: 
\"kubernetes.io/projected/24991a86-e06b-4e9e-8992-50fbe36dfe01-kube-api-access-9bf7f\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955407 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-encryption-config\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955431 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955456 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b25d038c-e025-44e6-8bf4-c0334cd5bab4-tmp\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955479 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " 
pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955503 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ed4f0e-0187-43d7-a456-eb14ee69d614-serving-cert\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955524 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-encryption-config\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955544 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-auth-proxy-config\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955566 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcpfv\" (UniqueName: \"kubernetes.io/projected/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-kube-api-access-rcpfv\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955587 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-etcd-client\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955606 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4105502f-c677-4389-9d65-126fd4126663-audit-dir\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955629 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqj2g\" (UniqueName: \"kubernetes.io/projected/f47442a6-b454-45d5-8094-794e063f573d-kube-api-access-dqj2g\") pod \"downloads-747b44746d-x9wkk\" (UID: \"f47442a6-b454-45d5-8094-794e063f573d\") " pod="openshift-console/downloads-747b44746d-x9wkk" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955653 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-etcd-client\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955679 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955716 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955740 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-dir\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955761 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cpsq\" (UniqueName: \"kubernetes.io/projected/85ed4f0e-0187-43d7-a456-eb14ee69d614-kube-api-access-5cpsq\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955784 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955803 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-config\") pod \"machine-approver-54c688565-pfbq6\" (UID: 
\"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955829 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955849 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4105502f-c677-4389-9d65-126fd4126663-node-pullsecrets\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955874 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-serving-cert\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955907 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-audit-policies\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955926 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-config\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955947 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hzgt\" (UniqueName: \"kubernetes.io/projected/4105502f-c677-4389-9d65-126fd4126663-kube-api-access-7hzgt\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955974 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-client-ca\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.955996 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956016 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 
00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956043 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-config\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956064 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-audit\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956090 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-serving-cert\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956113 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-policies\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956136 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-etcd-serving-ca\") pod \"apiserver-8596bd845d-7n2z7\" (UID: 
\"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956160 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rz2t\" (UniqueName: \"kubernetes.io/projected/14059e76-0bc1-4982-ad4f-3aa9254b420b-kube-api-access-9rz2t\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956189 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956210 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-client-ca\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956233 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d038c-e025-44e6-8bf4-c0334cd5bab4-serving-cert\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956281 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956307 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14059e76-0bc1-4982-ad4f-3aa9254b420b-audit-dir\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956330 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcmqs\" (UniqueName: \"kubernetes.io/projected/b25d038c-e025-44e6-8bf4-c0334cd5bab4-kube-api-access-kcmqs\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956387 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.956533 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.957130 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.957196 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.959282 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.961184 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-sl2nf"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.963677 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.963918 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.964965 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.968446 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.969257 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.971624 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29520000-tmdgt"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.974352 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.974429 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.974591 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.975408 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.975532 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.978076 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.983396 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h"] Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.984045 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-tmdgt" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.984548 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" Feb 16 00:10:56 crc kubenswrapper[5114]: I0216 00:10:56.999847 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.002002 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.002379 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.002457 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.004076 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.004322 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.006634 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.006959 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.007079 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.007327 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.007902 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.008020 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.008150 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.008468 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.008812 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.009420 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.009553 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.011764 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.012798 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.016021 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.016118 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.016148 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.018500 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.019152 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.020183 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.022336 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.023324 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.023424 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.023481 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.023885 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.023897 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 
00:10:57.024110 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.024375 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.024495 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.025784 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.025868 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026196 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026412 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026484 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026611 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026644 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026743 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"pruner-dockercfg-rs58m\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026779 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026781 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026854 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026900 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.026904 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"serviceca\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.027335 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.027378 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.027551 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.027786 5114 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.028148 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.028181 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.028306 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.035819 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.038761 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.039134 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.041100 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.041907 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.042124 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.042179 5114 kubelet.go:2537] "SyncLoop ADD" 
source="api" pods=["openshift-ingress/router-default-68cf44c8b8-vdzjf"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.043529 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.043725 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.062138 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.063431 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.063785 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.065120 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.069341 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070552 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14059e76-0bc1-4982-ad4f-3aa9254b420b-audit-dir\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070593 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c476e668-a97b-4ce6-9eb1-d278b804cf1d-config\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070612 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-client\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070630 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e8f4d24-5c9f-4a63-8909-f38807a68a86-tmp\") pod 
\"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070648 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kcmqs\" (UniqueName: \"kubernetes.io/projected/b25d038c-e025-44e6-8bf4-c0334cd5bab4-kube-api-access-kcmqs\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070666 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c476e668-a97b-4ce6-9eb1-d278b804cf1d-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070680 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070704 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8916fc5f-e3fa-4e47-af78-923d1cd35984-serving-cert\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070720 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e690d2a-4d5a-4d38-bf04-fe6951258527-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070744 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070761 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-image-import-ca\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070777 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsb7t\" (UniqueName: \"kubernetes.io/projected/e5d36493-e813-44ad-9206-003a1ed39135-kube-api-access-tsb7t\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070792 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmsrc\" (UniqueName: \"kubernetes.io/projected/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-kube-api-access-vmsrc\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070821 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070837 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57cff053-a179-4f6a-a38f-ddee39ec6c0b-metrics-tls\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070853 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-config\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070869 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f84bfa8-7177-4705-8591-f4e33059d290-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qqb9h\" (UID: 
\"5f84bfa8-7177-4705-8591-f4e33059d290\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070886 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-machine-approver-tls\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070902 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ed4f0e-0187-43d7-a456-eb14ee69d614-tmp\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070917 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-ca\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070932 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070949 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" 
(UniqueName: \"kubernetes.io/empty-dir/6e8f4d24-5c9f-4a63-8909-f38807a68a86-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070964 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/44a23ff1-70d4-4f26-b405-486ec014bf36-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070978 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-serving-cert\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.070995 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071011 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ae73bd-df87-4388-876a-2ed38972eb2b-serving-cert\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 
00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071026 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-config\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071040 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-oauth-serving-cert\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071054 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtwx\" (UniqueName: \"kubernetes.io/projected/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-kube-api-access-9xtwx\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071071 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7dsr\" (UniqueName: \"kubernetes.io/projected/3067f2a2-db60-4372-88da-6d376071d340-kube-api-access-x7dsr\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071089 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-config\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071106 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071123 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6htrs\" (UniqueName: \"kubernetes.io/projected/5973ce7e-fa3d-45a5-9700-34e045a81edc-kube-api-access-6htrs\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071138 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3067f2a2-db60-4372-88da-6d376071d340-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071164 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071180 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9bf7f\" (UniqueName: \"kubernetes.io/projected/24991a86-e06b-4e9e-8992-50fbe36dfe01-kube-api-access-9bf7f\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071196 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-encryption-config\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071213 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071229 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-config\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071261 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tw44\" (UniqueName: \"kubernetes.io/projected/6e8f4d24-5c9f-4a63-8909-f38807a68a86-kube-api-access-4tw44\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071279 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57cff053-a179-4f6a-a38f-ddee39ec6c0b-tmp-dir\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071294 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-trusted-ca-bundle\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071312 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e690d2a-4d5a-4d38-bf04-fe6951258527-config\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071331 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b25d038c-e025-44e6-8bf4-c0334cd5bab4-tmp\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071346 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071370 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ed4f0e-0187-43d7-a456-eb14ee69d614-serving-cert\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071385 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-encryption-config\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071399 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-auth-proxy-config\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071414 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rcpfv\" (UniqueName: \"kubernetes.io/projected/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-kube-api-access-rcpfv\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071429 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-etcd-client\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071445 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4105502f-c677-4389-9d65-126fd4126663-audit-dir\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071461 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5973ce7e-fa3d-45a5-9700-34e045a81edc-config\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071477 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05370d66-0f2a-4733-9077-d916206c2b6e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071493 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071508 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmn9p\" (UniqueName: \"kubernetes.io/projected/a1459fc5-08d9-4442-ad34-0b310742cad4-kube-api-access-zmn9p\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071530 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dqj2g\" (UniqueName: \"kubernetes.io/projected/f47442a6-b454-45d5-8094-794e063f573d-kube-api-access-dqj2g\") pod \"downloads-747b44746d-x9wkk\" (UID: \"f47442a6-b454-45d5-8094-794e063f573d\") " pod="openshift-console/downloads-747b44746d-x9wkk"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071547 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbldb\" (UniqueName: \"kubernetes.io/projected/57cff053-a179-4f6a-a38f-ddee39ec6c0b-kube-api-access-hbldb\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071565 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8e690d2a-4d5a-4d38-bf04-fe6951258527-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071581 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-service-ca\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071597 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e8f4d24-5c9f-4a63-8909-f38807a68a86-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071612 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-oauth-config\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071628 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32f15e1a-44ae-483f-8b19-d92afee5fdcc-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071644 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32f15e1a-44ae-483f-8b19-d92afee5fdcc-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071658 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3067f2a2-db60-4372-88da-6d376071d340-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071679 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-etcd-client\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071694 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-tmp-dir\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071709 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-images\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071737 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071753 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89ae73bd-df87-4388-876a-2ed38972eb2b-trusted-ca\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071780 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz4ch\" (UniqueName: \"kubernetes.io/projected/05370d66-0f2a-4733-9077-d916206c2b6e-kube-api-access-vz4ch\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071796 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44a23ff1-70d4-4f26-b405-486ec014bf36-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071812 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32f15e1a-44ae-483f-8b19-d92afee5fdcc-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071832 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071848 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-serving-cert\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071864 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5973ce7e-fa3d-45a5-9700-34e045a81edc-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071881 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05370d66-0f2a-4733-9077-d916206c2b6e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071899 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwwlf\" (UniqueName: \"kubernetes.io/projected/bdc47cbe-a3d3-432a-b8bb-399a35be1822-kube-api-access-vwwlf\") pod \"image-pruner-29520000-tmdgt\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") " pod="openshift-image-registry/image-pruner-29520000-tmdgt"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071917 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-dir\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071933 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cpsq\" (UniqueName: \"kubernetes.io/projected/85ed4f0e-0187-43d7-a456-eb14ee69d614-kube-api-access-5cpsq\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071951 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071966 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1459fc5-08d9-4442-ad34-0b310742cad4-serving-cert\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071983 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9rp\" (UniqueName: \"kubernetes.io/projected/5f84bfa8-7177-4705-8591-f4e33059d290-kube-api-access-tv9rp\") pod \"cluster-samples-operator-6b564684c8-qqb9h\" (UID: \"5f84bfa8-7177-4705-8591-f4e33059d290\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.071999 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8916fc5f-e3fa-4e47-af78-923d1cd35984-config\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.072323 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.072902 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/14059e76-0bc1-4982-ad4f-3aa9254b420b-audit-dir\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074201 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ed4f0e-0187-43d7-a456-eb14ee69d614-tmp\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074301 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-config\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074373 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074393 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4105502f-c677-4389-9d65-126fd4126663-node-pullsecrets\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074438 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e8f4d24-5c9f-4a63-8909-f38807a68a86-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074457 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xhsr\" (UniqueName: \"kubernetes.io/projected/d7dc7990-5b90-402e-b2bc-53d94e232af4-kube-api-access-7xhsr\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074479 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074503 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-serving-cert\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074521 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8916fc5f-e3fa-4e47-af78-923d1cd35984-kube-api-access\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074553 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-audit-policies\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074573 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-config\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074592 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7hzgt\" (UniqueName: \"kubernetes.io/projected/4105502f-c677-4389-9d65-126fd4126663-kube-api-access-7hzgt\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074609 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5d36493-e813-44ad-9206-003a1ed39135-available-featuregates\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074629 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e8f4d24-5c9f-4a63-8909-f38807a68a86-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074657 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-client-ca\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074675 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074693 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074718 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-config\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074737 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-667cz\" (UniqueName: \"kubernetes.io/projected/c476e668-a97b-4ce6-9eb1-d278b804cf1d-kube-api-access-667cz\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074754 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d36493-e813-44ad-9206-003a1ed39135-serving-cert\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074770 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-audit\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074787 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p77xz\" (UniqueName: \"kubernetes.io/projected/89ae73bd-df87-4388-876a-2ed38972eb2b-kube-api-access-p77xz\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074805 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-config\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074831 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-serving-cert\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074847 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44a23ff1-70d4-4f26-b405-486ec014bf36-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074864 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm2n5\" (UniqueName: \"kubernetes.io/projected/44a23ff1-70d4-4f26-b405-486ec014bf36-kube-api-access-qm2n5\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074882 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f15e1a-44ae-483f-8b19-d92afee5fdcc-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074901 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-policies\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074918 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-etcd-serving-ca\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074937 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9rz2t\" (UniqueName: \"kubernetes.io/projected/14059e76-0bc1-4982-ad4f-3aa9254b420b-kube-api-access-9rz2t\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074954 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ae73bd-df87-4388-876a-2ed38972eb2b-config\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074981 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.074997 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-client-ca\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075014 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdc47cbe-a3d3-432a-b8bb-399a35be1822-serviceca\") pod \"image-pruner-29520000-tmdgt\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") " pod="openshift-image-registry/image-pruner-29520000-tmdgt" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075036 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d038c-e025-44e6-8bf4-c0334cd5bab4-serving-cert\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075052 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8916fc5f-e3fa-4e47-af78-923d1cd35984-tmp-dir\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075069 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e690d2a-4d5a-4d38-bf04-fe6951258527-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075098 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075116 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5973ce7e-fa3d-45a5-9700-34e045a81edc-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075504 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b25d038c-e025-44e6-8bf4-c0334cd5bab4-tmp\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.075725 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.076552 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.076777 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.078655 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.079015 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.080469 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-config\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.080630 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.082587 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-image-import-ca\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.082673 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-dir\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.083210 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.083457 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.084522 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.084992 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-config\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.085208 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.085451 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.085678 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-etcd-serving-ca\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.085973 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-config\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 
16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.086613 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.086776 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.086907 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-encryption-config\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.087143 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-auth-proxy-config\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.087852 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.091089 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/85ed4f0e-0187-43d7-a456-eb14ee69d614-serving-cert\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.091506 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4105502f-c677-4389-9d65-126fd4126663-node-pullsecrets\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.094195 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.094757 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-audit-policies\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.094821 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/14059e76-0bc1-4982-ad4f-3aa9254b420b-trusted-ca-bundle\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.094898 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4105502f-c677-4389-9d65-126fd4126663-audit-dir\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.096440 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d038c-e025-44e6-8bf4-c0334cd5bab4-serving-cert\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.096858 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-policies\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.097078 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-client-ca\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.097210 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.097490 5114 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.098686 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-audit\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.099579 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.099720 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-serving-cert\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.099722 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-client-ca\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.100725 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-serving-cert\") pod 
\"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.101042 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4105502f-c677-4389-9d65-126fd4126663-config\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.101456 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.102603 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-encryption-config\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.102754 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-machine-approver-tls\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.103359 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 
00:10:57.106738 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/14059e76-0bc1-4982-ad4f-3aa9254b420b-etcd-client\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.106755 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4105502f-c677-4389-9d65-126fd4126663-etcd-client\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.109828 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.111609 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.114201 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.116809 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.117563 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.117681 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.117709 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.117683 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.118101 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.120584 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.138936 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.140343 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-crpbt"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.141346 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.148211 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-nrsjt"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.148716 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.151762 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.152074 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.154771 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-42lxx"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.154973 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.156787 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.157799 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-skdc2"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.157823 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmt8j"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.158341 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.161270 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-nhfsj"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.161292 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-n7nf8"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.161367 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.167021 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-h8c98"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.167188 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170518 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170550 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170618 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170633 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29520000-tmdgt"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170648 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5n27w"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170659 5114 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-d8d6z"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170693 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2jwtw"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170703 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170718 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170727 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170738 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-x9wkk"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170769 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170781 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170792 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170801 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170811 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170829 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-h8c98"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170841 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.170943 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.171169 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-sl2nf"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.171191 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-hkwvd"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.174895 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.174931 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.174941 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.174949 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.174961 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zffmj"]
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.174983 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hkwvd"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176027 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-ca\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176056 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176077 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/6e8f4d24-5c9f-4a63-8909-f38807a68a86-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176099 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/44a23ff1-70d4-4f26-b405-486ec014bf36-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176324 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-serving-cert\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176494 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ae73bd-df87-4388-876a-2ed38972eb2b-serving-cert\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176510 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/6e8f4d24-5c9f-4a63-8909-f38807a68a86-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176520 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-config\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176552 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-oauth-serving-cert\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176577 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtwx\" (UniqueName: \"kubernetes.io/projected/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-kube-api-access-9xtwx\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176601 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x7dsr\" (UniqueName: \"kubernetes.io/projected/3067f2a2-db60-4372-88da-6d376071d340-kube-api-access-x7dsr\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176619 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176638 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6htrs\" (UniqueName: \"kubernetes.io/projected/5973ce7e-fa3d-45a5-9700-34e045a81edc-kube-api-access-6htrs\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176663 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3067f2a2-db60-4372-88da-6d376071d340-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176711 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-config\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176732 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4tw44\" (UniqueName: \"kubernetes.io/projected/6e8f4d24-5c9f-4a63-8909-f38807a68a86-kube-api-access-4tw44\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176755 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57cff053-a179-4f6a-a38f-ddee39ec6c0b-tmp-dir\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176773 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-trusted-ca-bundle\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176796 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e690d2a-4d5a-4d38-bf04-fe6951258527-config\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176827 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5973ce7e-fa3d-45a5-9700-34e045a81edc-config\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176845 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05370d66-0f2a-4733-9077-d916206c2b6e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176867 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176926 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zmn9p\" (UniqueName: \"kubernetes.io/projected/a1459fc5-08d9-4442-ad34-0b310742cad4-kube-api-access-zmn9p\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176952 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hbldb\" (UniqueName: \"kubernetes.io/projected/57cff053-a179-4f6a-a38f-ddee39ec6c0b-kube-api-access-hbldb\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176971 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8e690d2a-4d5a-4d38-bf04-fe6951258527-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.176988 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-service-ca\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177155 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e8f4d24-5c9f-4a63-8909-f38807a68a86-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177193 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-oauth-config\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177649 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3067f2a2-db60-4372-88da-6d376071d340-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177782 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32f15e1a-44ae-483f-8b19-d92afee5fdcc-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177819 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32f15e1a-44ae-483f-8b19-d92afee5fdcc-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177840 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3067f2a2-db60-4372-88da-6d376071d340-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177867 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-tmp-dir\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177888 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-images\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177919 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89ae73bd-df87-4388-876a-2ed38972eb2b-trusted-ca\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177957 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vz4ch\" (UniqueName: \"kubernetes.io/projected/05370d66-0f2a-4733-9077-d916206c2b6e-kube-api-access-vz4ch\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.177981 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44a23ff1-70d4-4f26-b405-486ec014bf36-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178002 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32f15e1a-44ae-483f-8b19-d92afee5fdcc-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178025 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-serving-cert\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178071 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5973ce7e-fa3d-45a5-9700-34e045a81edc-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178093 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05370d66-0f2a-4733-9077-d916206c2b6e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178115 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vwwlf\" (UniqueName: \"kubernetes.io/projected/bdc47cbe-a3d3-432a-b8bb-399a35be1822-kube-api-access-vwwlf\") pod \"image-pruner-29520000-tmdgt\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") " pod="openshift-image-registry/image-pruner-29520000-tmdgt"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178136 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178146 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57cff053-a179-4f6a-a38f-ddee39ec6c0b-tmp-dir\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178141 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1459fc5-08d9-4442-ad34-0b310742cad4-serving-cert\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178306 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9rp\" (UniqueName: \"kubernetes.io/projected/5f84bfa8-7177-4705-8591-f4e33059d290-kube-api-access-tv9rp\") pod \"cluster-samples-operator-6b564684c8-qqb9h\" (UID: \"5f84bfa8-7177-4705-8591-f4e33059d290\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178333 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8916fc5f-e3fa-4e47-af78-923d1cd35984-config\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178379 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e8f4d24-5c9f-4a63-8909-f38807a68a86-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178400 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7xhsr\" (UniqueName: \"kubernetes.io/projected/d7dc7990-5b90-402e-b2bc-53d94e232af4-kube-api-access-7xhsr\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178418 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178464 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8916fc5f-e3fa-4e47-af78-923d1cd35984-kube-api-access\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178502 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5d36493-e813-44ad-9206-003a1ed39135-available-featuregates\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178519 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e8f4d24-5c9f-4a63-8909-f38807a68a86-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178555 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-667cz\" (UniqueName: \"kubernetes.io/projected/c476e668-a97b-4ce6-9eb1-d278b804cf1d-kube-api-access-667cz\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178571 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d36493-e813-44ad-9206-003a1ed39135-serving-cert\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178592 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p77xz\" (UniqueName: \"kubernetes.io/projected/89ae73bd-df87-4388-876a-2ed38972eb2b-kube-api-access-p77xz\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178611 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-config\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178635 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/44a23ff1-70d4-4f26-b405-486ec014bf36-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178655 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qm2n5\" (UniqueName: \"kubernetes.io/projected/44a23ff1-70d4-4f26-b405-486ec014bf36-kube-api-access-qm2n5\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178671 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f15e1a-44ae-483f-8b19-d92afee5fdcc-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178691 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ae73bd-df87-4388-876a-2ed38972eb2b-config\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178717 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdc47cbe-a3d3-432a-b8bb-399a35be1822-serviceca\") pod \"image-pruner-29520000-tmdgt\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") " pod="openshift-image-registry/image-pruner-29520000-tmdgt"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178735 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8916fc5f-e3fa-4e47-af78-923d1cd35984-tmp-dir\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178751 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e690d2a-4d5a-4d38-bf04-fe6951258527-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178783 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5973ce7e-fa3d-45a5-9700-34e045a81edc-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.178807 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c476e668-a97b-4ce6-9eb1-d278b804cf1d-config\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.179067 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-service-ca\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.179155 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-config\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.179592 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-trusted-ca-bundle\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.179663 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c476e668-a97b-4ce6-9eb1-d278b804cf1d-config\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.179797 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8e690d2a-4d5a-4d38-bf04-fe6951258527-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180367 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-client\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180452 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e8f4d24-5c9f-4a63-8909-f38807a68a86-tmp\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180503 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c476e668-a97b-4ce6-9eb1-d278b804cf1d-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180528 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180587 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8916fc5f-e3fa-4e47-af78-923d1cd35984-serving-cert\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180619 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e690d2a-4d5a-4d38-bf04-fe6951258527-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180671 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tsb7t\" (UniqueName: \"kubernetes.io/projected/e5d36493-e813-44ad-9206-003a1ed39135-kube-api-access-tsb7t\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180697 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vmsrc\" (UniqueName: \"kubernetes.io/projected/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-kube-api-access-vmsrc\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180735 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57cff053-a179-4f6a-a38f-ddee39ec6c0b-metrics-tls\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180742 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/d7dc7990-5b90-402e-b2bc-53d94e232af4-oauth-serving-cert\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180765 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-config\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.180798 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f84bfa8-7177-4705-8591-f4e33059d290-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qqb9h\" (UID: \"5f84bfa8-7177-4705-8591-f4e33059d290\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h"
Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.181052 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-images\") pod \"machine-api-operator-755bb95488-5n27w\" (UID:
\"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.181120 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-config\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.181148 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/89ae73bd-df87-4388-876a-2ed38972eb2b-trusted-ca\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.181070 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89ae73bd-df87-4388-876a-2ed38972eb2b-config\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.181865 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-config\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.182126 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/44a23ff1-70d4-4f26-b405-486ec014bf36-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: 
\"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.182623 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8916fc5f-e3fa-4e47-af78-923d1cd35984-tmp-dir\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.182848 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1459fc5-08d9-4442-ad34-0b310742cad4-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.182886 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.183323 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/6e8f4d24-5c9f-4a63-8909-f38807a68a86-tmp\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.184365 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-tmp-dir\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.184733 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/44a23ff1-70d4-4f26-b405-486ec014bf36-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185302 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/6e8f4d24-5c9f-4a63-8909-f38807a68a86-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185612 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/57cff053-a179-4f6a-a38f-ddee39ec6c0b-metrics-tls\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185623 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89ae73bd-df87-4388-876a-2ed38972eb2b-serving-cert\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185715 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns/dns-default-h8c98"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185737 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-l8qvm"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185749 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185760 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmt8j"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185769 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185779 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-crpbt"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185790 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-42lxx"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185801 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185811 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185822 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-nrsjt"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185832 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zffmj"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185841 5114 
kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185850 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hkwvd"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185862 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-5w595"] Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.185994 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.186197 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c476e668-a97b-4ce6-9eb1-d278b804cf1d-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.187640 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-serving-cert\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.187641 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/d7dc7990-5b90-402e-b2bc-53d94e232af4-console-oauth-config\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.187850 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1459fc5-08d9-4442-ad34-0b310742cad4-serving-cert\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.188578 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e8f4d24-5c9f-4a63-8909-f38807a68a86-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.188681 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e5d36493-e813-44ad-9206-003a1ed39135-available-featuregates\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.188898 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5d36493-e813-44ad-9206-003a1ed39135-serving-cert\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.189150 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/32f15e1a-44ae-483f-8b19-d92afee5fdcc-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.189596 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdc47cbe-a3d3-432a-b8bb-399a35be1822-serviceca\") pod \"image-pruner-29520000-tmdgt\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") " pod="openshift-image-registry/image-pruner-29520000-tmdgt" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.190083 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5973ce7e-fa3d-45a5-9700-34e045a81edc-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.191629 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-client\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.192138 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5f84bfa8-7177-4705-8591-f4e33059d290-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-qqb9h\" (UID: \"5f84bfa8-7177-4705-8591-f4e33059d290\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.192908 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5w595" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.197191 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.203554 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-serving-cert\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.216129 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.216791 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-service-ca\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.239622 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.247429 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-etcd-ca\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.256622 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.277078 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.297739 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.307706 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-config\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.337569 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.360666 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.377155 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.386732 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5973ce7e-fa3d-45a5-9700-34e045a81edc-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.397748 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.417701 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.428074 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5973ce7e-fa3d-45a5-9700-34e045a81edc-config\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.438288 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.457084 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.476979 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.484636 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/05370d66-0f2a-4733-9077-d916206c2b6e-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: 
\"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.497394 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.503924 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/05370d66-0f2a-4733-9077-d916206c2b6e-config\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.516653 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.537797 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.557648 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.577080 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.584211 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/32f15e1a-44ae-483f-8b19-d92afee5fdcc-serving-cert\") pod 
\"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.597276 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.600680 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32f15e1a-44ae-483f-8b19-d92afee5fdcc-config\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.617670 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.625965 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8916fc5f-e3fa-4e47-af78-923d1cd35984-serving-cert\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.637048 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.643841 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8916fc5f-e3fa-4e47-af78-923d1cd35984-config\") pod \"kube-apiserver-operator-575994946d-9nrhq\" 
(UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.656936 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.678468 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.697214 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.717379 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.725584 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3067f2a2-db60-4372-88da-6d376071d340-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.737858 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.741625 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e690d2a-4d5a-4d38-bf04-fe6951258527-config\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.757722 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.767389 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e690d2a-4d5a-4d38-bf04-fe6951258527-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.778160 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.797280 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.817858 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.837077 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.858524 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.877362 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.898095 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.917115 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.937169 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.977688 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.985024 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcmqs\" (UniqueName: \"kubernetes.io/projected/b25d038c-e025-44e6-8bf4-c0334cd5bab4-kube-api-access-kcmqs\") pod \"route-controller-manager-776cdc94d6-gldqw\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:57 crc kubenswrapper[5114]: I0216 00:10:57.997383 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.017297 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.037868 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.056922 5114 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.076625 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.097497 5114 request.go:752] "Waited before sending request" delay="1.01874453s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.099670 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.116738 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.136078 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.157180 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.177095 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.197156 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Feb 16 00:10:58 crc 
kubenswrapper[5114]: I0216 00:10:58.218435 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.222084 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.238298 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.257988 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.312205 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bf7f\" (UniqueName: \"kubernetes.io/projected/24991a86-e06b-4e9e-8992-50fbe36dfe01-kube-api-access-9bf7f\") pod \"oauth-openshift-66458b6674-2jwtw\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.326698 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.332542 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcpfv\" (UniqueName: \"kubernetes.io/projected/38b84ebe-e4a0-41ea-a89a-7f8d0af48c70-kube-api-access-rcpfv\") pod \"machine-approver-54c688565-pfbq6\" (UID: \"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.341525 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cpsq\" (UniqueName: \"kubernetes.io/projected/85ed4f0e-0187-43d7-a456-eb14ee69d614-kube-api-access-5cpsq\") pod \"controller-manager-65b6cccf98-skdc2\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.372656 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rz2t\" (UniqueName: \"kubernetes.io/projected/14059e76-0bc1-4982-ad4f-3aa9254b420b-kube-api-access-9rz2t\") pod \"apiserver-8596bd845d-7n2z7\" (UID: \"14059e76-0bc1-4982-ad4f-3aa9254b420b\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.393638 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqj2g\" (UniqueName: \"kubernetes.io/projected/f47442a6-b454-45d5-8094-794e063f573d-kube-api-access-dqj2g\") pod \"downloads-747b44746d-x9wkk\" (UID: \"f47442a6-b454-45d5-8094-794e063f573d\") " pod="openshift-console/downloads-747b44746d-x9wkk" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.397907 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 16 00:10:58 crc 
kubenswrapper[5114]: I0216 00:10:58.411967 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hzgt\" (UniqueName: \"kubernetes.io/projected/4105502f-c677-4389-9d65-126fd4126663-kube-api-access-7hzgt\") pod \"apiserver-9ddfb9f55-nhfsj\" (UID: \"4105502f-c677-4389-9d65-126fd4126663\") " pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.418767 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.431065 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.437987 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.457504 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.477970 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.498488 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.506739 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.517089 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.517659 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.522665 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"] Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.537824 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Feb 16 00:10:58 crc kubenswrapper[5114]: W0216 00:10:58.542656 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb25d038c_e025_44e6_8bf4_c0334cd5bab4.slice/crio-4c2885367c4db550426da3390fce4f11f262f94af4365e1b367267916626aed3 WatchSource:0}: Error finding container 4c2885367c4db550426da3390fce4f11f262f94af4365e1b367267916626aed3: Status 404 returned error can't find the container with id 4c2885367c4db550426da3390fce4f11f262f94af4365e1b367267916626aed3 Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.543844 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-x9wkk" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.554008 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2jwtw"] Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.556826 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.578338 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.590616 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.599787 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.603302 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" event={"ID":"24991a86-e06b-4e9e-8992-50fbe36dfe01","Type":"ContainerStarted","Data":"82e19595520fa41374e48f699fce65dd9abf976787a9f4a89a0be6f5a8e74c19"} Feb 16 00:10:58 crc kubenswrapper[5114]: W0216 00:10:58.617697 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38b84ebe_e4a0_41ea_a89a_7f8d0af48c70.slice/crio-ae4a209b5b2758e93e2ef73326b2446c460bffb495c4f2a7547e7a9153b37e78 WatchSource:0}: Error finding container ae4a209b5b2758e93e2ef73326b2446c460bffb495c4f2a7547e7a9153b37e78: Status 404 returned error can't find the container with id ae4a209b5b2758e93e2ef73326b2446c460bffb495c4f2a7547e7a9153b37e78 Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.618727 
5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.620262 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" event={"ID":"b25d038c-e025-44e6-8bf4-c0334cd5bab4","Type":"ContainerStarted","Data":"4c2885367c4db550426da3390fce4f11f262f94af4365e1b367267916626aed3"} Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.637991 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.656857 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.661858 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7"] Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.677011 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.698073 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.718731 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.745707 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.752158 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-65b6cccf98-skdc2"] Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.757778 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.758362 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-nhfsj"] Feb 16 00:10:58 crc kubenswrapper[5114]: W0216 00:10:58.768706 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85ed4f0e_0187_43d7_a456_eb14ee69d614.slice/crio-9ef2d6e4750a4338431ce1c06e6d559a868ea5b6abf2614eb84c1f8c9db76ca4 WatchSource:0}: Error finding container 9ef2d6e4750a4338431ce1c06e6d559a868ea5b6abf2614eb84c1f8c9db76ca4: Status 404 returned error can't find the container with id 9ef2d6e4750a4338431ce1c06e6d559a868ea5b6abf2614eb84c1f8c9db76ca4 Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.774540 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-x9wkk"] Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.777099 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.796815 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.816478 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.836655 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 
00:10:58.857488 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.897986 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.918000 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.937539 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.957745 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.981182 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Feb 16 00:10:58 crc kubenswrapper[5114]: I0216 00:10:58.997432 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.017209 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.057496 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7dsr\" (UniqueName: \"kubernetes.io/projected/3067f2a2-db60-4372-88da-6d376071d340-kube-api-access-x7dsr\") pod \"machine-config-controller-f9cdd68f7-krh67\" (UID: \"3067f2a2-db60-4372-88da-6d376071d340\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.073459 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xtwx\" (UniqueName: \"kubernetes.io/projected/4f2c237a-0f7f-4dd6-a35c-6533fbc3522e-kube-api-access-9xtwx\") pod \"machine-api-operator-755bb95488-5n27w\" (UID: \"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e\") " pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.092286 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmn9p\" (UniqueName: \"kubernetes.io/projected/a1459fc5-08d9-4442-ad34-0b310742cad4-kube-api-access-zmn9p\") pod \"authentication-operator-7f5c659b84-76xtj\" (UID: \"a1459fc5-08d9-4442-ad34-0b310742cad4\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.114510 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6htrs\" (UniqueName: \"kubernetes.io/projected/5973ce7e-fa3d-45a5-9700-34e045a81edc-kube-api-access-6htrs\") pod \"openshift-controller-manager-operator-686468bdd5-xdwmj\" (UID: \"5973ce7e-fa3d-45a5-9700-34e045a81edc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.115153 5114 request.go:752] "Waited before sending request" delay="1.936881496s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/serviceaccounts/cluster-image-registry-operator/token" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.117329 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.132209 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6e8f4d24-5c9f-4a63-8909-f38807a68a86-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.157051 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbldb\" (UniqueName: \"kubernetes.io/projected/57cff053-a179-4f6a-a38f-ddee39ec6c0b-kube-api-access-hbldb\") pod \"dns-operator-799b87ffcd-5wc8p\" (UID: \"57cff053-a179-4f6a-a38f-ddee39ec6c0b\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.177158 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/32f15e1a-44ae-483f-8b19-d92afee5fdcc-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-8x4kb\" (UID: \"32f15e1a-44ae-483f-8b19-d92afee5fdcc\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.192172 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p77xz\" (UniqueName: \"kubernetes.io/projected/89ae73bd-df87-4388-876a-2ed38972eb2b-kube-api-access-p77xz\") pod \"console-operator-67c89758df-sl2nf\" (UID: \"89ae73bd-df87-4388-876a-2ed38972eb2b\") " pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.217762 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/44a23ff1-70d4-4f26-b405-486ec014bf36-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.251927 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.278171 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.286737 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8e690d2a-4d5a-4d38-bf04-fe6951258527-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-btwkm\" (UID: \"8e690d2a-4d5a-4d38-bf04-fe6951258527\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.287381 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tw44\" (UniqueName: \"kubernetes.io/projected/6e8f4d24-5c9f-4a63-8909-f38807a68a86-kube-api-access-4tw44\") pod \"cluster-image-registry-operator-86c45576b9-t657p\" (UID: \"6e8f4d24-5c9f-4a63-8909-f38807a68a86\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.287863 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz4ch\" (UniqueName: \"kubernetes.io/projected/05370d66-0f2a-4733-9077-d916206c2b6e-kube-api-access-vz4ch\") pod \"kube-storage-version-migrator-operator-565b79b866-shgmb\" (UID: \"05370d66-0f2a-4733-9077-d916206c2b6e\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.304556 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmsrc\" (UniqueName: \"kubernetes.io/projected/52fcb5f2-d1d1-45d2-ba98-8619492efe7f-kube-api-access-vmsrc\") pod \"etcd-operator-69b85846b6-cpqbw\" (UID: \"52fcb5f2-d1d1-45d2-ba98-8619492efe7f\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.320120 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.325999 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.331969 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsb7t\" (UniqueName: \"kubernetes.io/projected/e5d36493-e813-44ad-9206-003a1ed39135-kube-api-access-tsb7t\") pod \"openshift-config-operator-5777786469-d8d6z\" (UID: \"e5d36493-e813-44ad-9206-003a1ed39135\") " pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.334688 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8916fc5f-e3fa-4e47-af78-923d1cd35984-kube-api-access\") pod \"kube-apiserver-operator-575994946d-9nrhq\" (UID: \"8916fc5f-e3fa-4e47-af78-923d1cd35984\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.354785 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-667cz\" (UniqueName: 
\"kubernetes.io/projected/c476e668-a97b-4ce6-9eb1-d278b804cf1d-kube-api-access-667cz\") pod \"openshift-apiserver-operator-846cbfc458-m5g99\" (UID: \"c476e668-a97b-4ce6-9eb1-d278b804cf1d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.370342 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.375292 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9rp\" (UniqueName: \"kubernetes.io/projected/5f84bfa8-7177-4705-8591-f4e33059d290-kube-api-access-tv9rp\") pod \"cluster-samples-operator-6b564684c8-qqb9h\" (UID: \"5f84bfa8-7177-4705-8591-f4e33059d290\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.378024 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.386520 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.398089 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwwlf\" (UniqueName: \"kubernetes.io/projected/bdc47cbe-a3d3-432a-b8bb-399a35be1822-kube-api-access-vwwlf\") pod \"image-pruner-29520000-tmdgt\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") " pod="openshift-image-registry/image-pruner-29520000-tmdgt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.401417 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.409565 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.411901 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67"] Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.415063 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xhsr\" (UniqueName: \"kubernetes.io/projected/d7dc7990-5b90-402e-b2bc-53d94e232af4-kube-api-access-7xhsr\") pod \"console-64d44f6ddf-l8qvm\" (UID: \"d7dc7990-5b90-402e-b2bc-53d94e232af4\") " pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.430090 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.439744 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.450239 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm2n5\" (UniqueName: \"kubernetes.io/projected/44a23ff1-70d4-4f26-b405-486ec014bf36-kube-api-access-qm2n5\") pod \"ingress-operator-6b9cb4dbcf-fj6tq\" (UID: \"44a23ff1-70d4-4f26-b405-486ec014bf36\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.457770 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.482493 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.501561 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.524955 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.537001 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.541048 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.564589 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.568316 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623141 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-trusted-ca\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623497 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f648600b-b3cf-4360-97e9-91a7b33ca283-srv-cert\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623516 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/747ba08a-df9e-422d-be4e-f2ababc30dea-ca-trust-extracted\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" 
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623532 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69d65fd4-cf6c-4743-bb16-57d591424ffb-metrics-tls\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623572 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjbm9\" (UniqueName: \"kubernetes.io/projected/98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d-kube-api-access-hjbm9\") pod \"migrator-866fcbc849-tdk6q\" (UID: \"98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623618 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-default-certificate\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623693 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-stats-auth\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623734 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0423bba-777b-4bd6-bef4-f126cc68f884-package-server-manager-serving-cert\") pod 
\"package-server-manager-77f986bd66-47pxl\" (UID: \"c0423bba-777b-4bd6-bef4-f126cc68f884\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623800 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjkk\" (UniqueName: \"kubernetes.io/projected/cd5244de-0460-4f31-914d-85541d3c975f-kube-api-access-cjjkk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-7qhtw\" (UID: \"cd5244de-0460-4f31-914d-85541d3c975f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623834 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn42g\" (UniqueName: \"kubernetes.io/projected/c0423bba-777b-4bd6-bef4-f126cc68f884-kube-api-access-bn42g\") pod \"package-server-manager-77f986bd66-47pxl\" (UID: \"c0423bba-777b-4bd6-bef4-f126cc68f884\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623849 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762099f7-c3ba-482a-9910-765d1abc7388-tmpfs\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623888 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zthpx\" (UniqueName: \"kubernetes.io/projected/ef094cac-bbf6-4a7b-9549-724b916baf0e-kube-api-access-zthpx\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:59 crc 
kubenswrapper[5114]: I0216 00:10:59.623914 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ef094cac-bbf6-4a7b-9549-724b916baf0e-signing-cabundle\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623929 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvvwb\" (UniqueName: \"kubernetes.io/projected/25c871eb-063b-4177-b300-f3280f9f7c6a-kube-api-access-nvvwb\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623951 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.623979 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc6vm\" (UniqueName: \"kubernetes.io/projected/f648600b-b3cf-4360-97e9-91a7b33ca283-kube-api-access-zc6vm\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624003 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/762099f7-c3ba-482a-9910-765d1abc7388-apiservice-cert\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624017 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8d81cb10-abbd-4c04-9632-446be1e89c2b-ready\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624081 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd5244de-0460-4f31-914d-85541d3c975f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-7qhtw\" (UID: \"cd5244de-0460-4f31-914d-85541d3c975f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624100 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8d81cb10-abbd-4c04-9632-446be1e89c2b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624137 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7fnh\" (UniqueName: \"kubernetes.io/projected/7a94ef71-d05f-4af7-b557-e3c034866f73-kube-api-access-c7fnh\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624186 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-bound-sa-token\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624213 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c871eb-063b-4177-b300-f3280f9f7c6a-config-volume\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624263 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-tls\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624282 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624297 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5478df7b-0c00-4c78-9a8e-1bdba1477cde-service-ca-bundle\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624371 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpl5g\" (UniqueName: \"kubernetes.io/projected/dfbe9d8e-db99-404d-ba9d-d173ab3b6434-kube-api-access-kpl5g\") pod \"multus-admission-controller-69db94689b-42lxx\" (UID: \"dfbe9d8e-db99-404d-ba9d-d173ab3b6434\") " pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624416 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/747ba08a-df9e-422d-be4e-f2ababc30dea-installation-pull-secrets\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624432 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25c871eb-063b-4177-b300-f3280f9f7c6a-secret-volume\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624463 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ef094cac-bbf6-4a7b-9549-724b916baf0e-signing-key\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " 
pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624477 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762099f7-c3ba-482a-9910-765d1abc7388-webhook-cert\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624492 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcgnt\" (UniqueName: \"kubernetes.io/projected/5478df7b-0c00-4c78-9a8e-1bdba1477cde-kube-api-access-bcgnt\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624521 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dfbe9d8e-db99-404d-ba9d-d173ab3b6434-webhook-certs\") pod \"multus-admission-controller-69db94689b-42lxx\" (UID: \"dfbe9d8e-db99-404d-ba9d-d173ab3b6434\") " pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624536 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d65fd4-cf6c-4743-bb16-57d591424ffb-config-volume\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624593 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-srv-cert\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624611 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a94ef71-d05f-4af7-b557-e3c034866f73-config\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624627 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5p7z\" (UniqueName: \"kubernetes.io/projected/69d65fd4-cf6c-4743-bb16-57d591424ffb-kube-api-access-s5p7z\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624673 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-kube-api-access-5cqfg\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624688 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5hn\" (UniqueName: \"kubernetes.io/projected/144852dc-946d-4a33-8453-c3d5bb49127d-kube-api-access-zv5hn\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc 
kubenswrapper[5114]: I0216 00:10:59.624703 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/289e41c8-1dae-4739-a9a5-41f112254197-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624773 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/289e41c8-1dae-4739-a9a5-41f112254197-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624788 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhdtn\" (UniqueName: \"kubernetes.io/projected/8d81cb10-abbd-4c04-9632-446be1e89c2b-kube-api-access-nhdtn\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624804 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69d65fd4-cf6c-4743-bb16-57d591424ffb-tmp-dir\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624832 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pctdn\" (UniqueName: \"kubernetes.io/projected/762099f7-c3ba-482a-9910-765d1abc7388-kube-api-access-pctdn\") pod 
\"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624919 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a94ef71-d05f-4af7-b557-e3c034866f73-serving-cert\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624937 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-metrics-certs\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.624990 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/289e41c8-1dae-4739-a9a5-41f112254197-images\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625005 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8d81cb10-abbd-4c04-9632-446be1e89c2b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625020 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f648600b-b3cf-4360-97e9-91a7b33ca283-tmpfs\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625067 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-tmpfs\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625132 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/144852dc-946d-4a33-8453-c3d5bb49127d-tmp\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625175 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-certificates\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625207 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625223 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx2kf\" (UniqueName: \"kubernetes.io/projected/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-kube-api-access-nx2kf\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625295 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625312 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzs6l\" (UniqueName: \"kubernetes.io/projected/289e41c8-1dae-4739-a9a5-41f112254197-kube-api-access-hzs6l\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.625437 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f648600b-b3cf-4360-97e9-91a7b33ca283-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: E0216 00:10:59.632677 5114 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.13266195 +0000 UTC m=+136.513938768 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.638088 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-tmdgt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.640354 5114 generic.go:358] "Generic (PLEG): container finished" podID="14059e76-0bc1-4982-ad4f-3aa9254b420b" containerID="bbb0c5c1f80f4d4dffa8a979b5caa7c0915a9e9c875776ef9ae2ebe650cb45f3" exitCode=0 Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.640868 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" event={"ID":"14059e76-0bc1-4982-ad4f-3aa9254b420b","Type":"ContainerDied","Data":"bbb0c5c1f80f4d4dffa8a979b5caa7c0915a9e9c875776ef9ae2ebe650cb45f3"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.640917 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" event={"ID":"14059e76-0bc1-4982-ad4f-3aa9254b420b","Type":"ContainerStarted","Data":"1de6a8a539af5367d267ff4fa79c7286916dc15b1e32b13f983bb2158a52b061"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.642602 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" event={"ID":"b25d038c-e025-44e6-8bf4-c0334cd5bab4","Type":"ContainerStarted","Data":"5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.643581 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.645957 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.650458 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.659071 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.659650 5114 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-gldqw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.659691 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" podUID="b25d038c-e025-44e6-8bf4-c0334cd5bab4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.660456 5114 generic.go:358] "Generic 
(PLEG): container finished" podID="4105502f-c677-4389-9d65-126fd4126663" containerID="b640e2032df3883df7c87fe07e7fc634a37d8d5915791647609efff51073ac27" exitCode=0 Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.660561 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" event={"ID":"4105502f-c677-4389-9d65-126fd4126663","Type":"ContainerDied","Data":"b640e2032df3883df7c87fe07e7fc634a37d8d5915791647609efff51073ac27"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.660592 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" event={"ID":"4105502f-c677-4389-9d65-126fd4126663","Type":"ContainerStarted","Data":"6804a460cefc868b00d2ddce0e5e9cb24b7dd1851cd5a4c8c5d59952d26d9f03"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.671765 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-x9wkk" event={"ID":"f47442a6-b454-45d5-8094-794e063f573d","Type":"ContainerStarted","Data":"c2c0be63ea217f5e07232cf04ee1e330b26876a93855a03cb7cfb2e0c15ad1df"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.671822 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-x9wkk" event={"ID":"f47442a6-b454-45d5-8094-794e063f573d","Type":"ContainerStarted","Data":"bf6cb4a1aa4e23f72cb9b8a9fa98d397c6884e895eaea16b3d592eb78c55cbf2"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.672184 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-x9wkk" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.685573 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" event={"ID":"24991a86-e06b-4e9e-8992-50fbe36dfe01","Type":"ContainerStarted","Data":"a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512"} Feb 16 00:10:59 crc 
kubenswrapper[5114]: I0216 00:10:59.686939 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.687020 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-x9wkk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.687062 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x9wkk" podUID="f47442a6-b454-45d5-8094-794e063f573d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.689582 5114 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-2jwtw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.689611 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" podUID="24991a86-e06b-4e9e-8992-50fbe36dfe01" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.709145 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" event={"ID":"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70","Type":"ContainerStarted","Data":"53fc9180fefdd8433e095f6693a4e1f46cd5a12a3e1cb2dce528097e455ac0b3"} Feb 16 00:10:59 crc 
kubenswrapper[5114]: I0216 00:10:59.709212 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" event={"ID":"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70","Type":"ContainerStarted","Data":"e6d19f061b52de0e8a373a80479ca53aef27422f3c96c696a9192b1adee42580"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.709227 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" event={"ID":"38b84ebe-e4a0-41ea-a89a-7f8d0af48c70","Type":"ContainerStarted","Data":"ae4a209b5b2758e93e2ef73326b2446c460bffb495c4f2a7547e7a9153b37e78"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.714427 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" event={"ID":"85ed4f0e-0187-43d7-a456-eb14ee69d614","Type":"ContainerStarted","Data":"6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.714476 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" event={"ID":"85ed4f0e-0187-43d7-a456-eb14ee69d614","Type":"ContainerStarted","Data":"9ef2d6e4750a4338431ce1c06e6d559a868ea5b6abf2614eb84c1f8c9db76ca4"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.755889 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756071 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:10:59 crc kubenswrapper[5114]: 
I0216 00:10:59.756275 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-tmpfs\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756314 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htwk5\" (UniqueName: \"kubernetes.io/projected/e690b054-0cba-4297-8a2b-c926b456a057-kube-api-access-htwk5\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756365 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/144852dc-946d-4a33-8453-c3d5bb49127d-tmp\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756364 5114 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-skdc2 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756407 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-certificates\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 
00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756437 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" podUID="85ed4f0e-0187-43d7-a456-eb14ee69d614" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756480 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.756505 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nx2kf\" (UniqueName: \"kubernetes.io/projected/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-kube-api-access-nx2kf\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: E0216 00:10:59.756620 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.25657715 +0000 UTC m=+136.637853968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.757513 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.757569 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hzs6l\" (UniqueName: \"kubernetes.io/projected/289e41c8-1dae-4739-a9a5-41f112254197-kube-api-access-hzs6l\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.757651 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f648600b-b3cf-4360-97e9-91a7b33ca283-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.757854 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-trusted-ca\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.757901 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f648600b-b3cf-4360-97e9-91a7b33ca283-srv-cert\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758135 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-socket-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758214 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/747ba08a-df9e-422d-be4e-f2ababc30dea-ca-trust-extracted\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758275 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69d65fd4-cf6c-4743-bb16-57d591424ffb-metrics-tls\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758320 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hjbm9\" (UniqueName: 
\"kubernetes.io/projected/98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d-kube-api-access-hjbm9\") pod \"migrator-866fcbc849-tdk6q\" (UID: \"98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758363 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-mountpoint-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758397 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e690b054-0cba-4297-8a2b-c926b456a057-certs\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758443 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-default-certificate\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758488 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-tmpfs\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758545 5114 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-stats-auth\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758572 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0423bba-777b-4bd6-bef4-f126cc68f884-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-47pxl\" (UID: \"c0423bba-777b-4bd6-bef4-f126cc68f884\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758596 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-csi-data-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758621 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48fdn\" (UniqueName: \"kubernetes.io/projected/bde0f080-6423-454b-b0b5-30b9ee95e15e-kube-api-access-48fdn\") pod \"ingress-canary-hkwvd\" (UID: \"bde0f080-6423-454b-b0b5-30b9ee95e15e\") " pod="openshift-ingress-canary/ingress-canary-hkwvd" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758710 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bde0f080-6423-454b-b0b5-30b9ee95e15e-cert\") pod \"ingress-canary-hkwvd\" (UID: \"bde0f080-6423-454b-b0b5-30b9ee95e15e\") " pod="openshift-ingress-canary/ingress-canary-hkwvd" Feb 16 00:10:59 crc kubenswrapper[5114]: 
I0216 00:10:59.758740 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjkk\" (UniqueName: \"kubernetes.io/projected/cd5244de-0460-4f31-914d-85541d3c975f-kube-api-access-cjjkk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-7qhtw\" (UID: \"cd5244de-0460-4f31-914d-85541d3c975f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.758763 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln9zg\" (UniqueName: \"kubernetes.io/projected/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-kube-api-access-ln9zg\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.759445 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.759863 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" event={"ID":"3067f2a2-db60-4372-88da-6d376071d340","Type":"ContainerStarted","Data":"bbb95520919955cc123bbd1dc0d0cec616634b91076a35a9d9e98e9ce704f8d2"} Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.760403 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/144852dc-946d-4a33-8453-c3d5bb49127d-tmp\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.762656 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-trusted-ca\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.765537 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-certificates\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.766584 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bn42g\" (UniqueName: \"kubernetes.io/projected/c0423bba-777b-4bd6-bef4-f126cc68f884-kube-api-access-bn42g\") pod \"package-server-manager-77f986bd66-47pxl\" (UID: \"c0423bba-777b-4bd6-bef4-f126cc68f884\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.767441 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/747ba08a-df9e-422d-be4e-f2ababc30dea-ca-trust-extracted\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.767486 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762099f7-c3ba-482a-9910-765d1abc7388-tmpfs\") pod 
\"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.767945 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zthpx\" (UniqueName: \"kubernetes.io/projected/ef094cac-bbf6-4a7b-9549-724b916baf0e-kube-api-access-zthpx\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.768011 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e690b054-0cba-4297-8a2b-c926b456a057-node-bootstrap-token\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.768051 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ef094cac-bbf6-4a7b-9549-724b916baf0e-signing-cabundle\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.768073 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nvvwb\" (UniqueName: \"kubernetes.io/projected/25c871eb-063b-4177-b300-f3280f9f7c6a-kube-api-access-nvvwb\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.768677 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" 
(UniqueName: \"kubernetes.io/secret/f648600b-b3cf-4360-97e9-91a7b33ca283-srv-cert\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.769461 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.770572 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ef094cac-bbf6-4a7b-9549-724b916baf0e-signing-cabundle\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.770999 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zc6vm\" (UniqueName: \"kubernetes.io/projected/f648600b-b3cf-4360-97e9-91a7b33ca283-kube-api-access-zc6vm\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.771111 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762099f7-c3ba-482a-9910-765d1abc7388-apiservice-cert\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 
00:10:59.771276 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8d81cb10-abbd-4c04-9632-446be1e89c2b-ready\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.771365 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd5244de-0460-4f31-914d-85541d3c975f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-7qhtw\" (UID: \"cd5244de-0460-4f31-914d-85541d3c975f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.771906 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8d81cb10-abbd-4c04-9632-446be1e89c2b-ready\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: E0216 00:10:59.773348 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.273295505 +0000 UTC m=+136.654572333 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.773572 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8d81cb10-abbd-4c04-9632-446be1e89c2b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.774137 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8d81cb10-abbd-4c04-9632-446be1e89c2b-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.774221 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7fnh\" (UniqueName: \"kubernetes.io/projected/7a94ef71-d05f-4af7-b557-e3c034866f73-kube-api-access-c7fnh\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.774633 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-bound-sa-token\") pod 
\"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.775217 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c871eb-063b-4177-b300-f3280f9f7c6a-config-volume\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.775692 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-tls\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.775779 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.775806 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-default-certificate\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.775849 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5478df7b-0c00-4c78-9a8e-1bdba1477cde-service-ca-bundle\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.776240 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c871eb-063b-4177-b300-f3280f9f7c6a-config-volume\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.777541 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5478df7b-0c00-4c78-9a8e-1bdba1477cde-service-ca-bundle\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.777634 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-registration-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.777639 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/c0423bba-777b-4bd6-bef4-f126cc68f884-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-47pxl\" (UID: \"c0423bba-777b-4bd6-bef4-f126cc68f884\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.777682 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kpl5g\" (UniqueName: \"kubernetes.io/projected/dfbe9d8e-db99-404d-ba9d-d173ab3b6434-kube-api-access-kpl5g\") pod \"multus-admission-controller-69db94689b-42lxx\" (UID: \"dfbe9d8e-db99-404d-ba9d-d173ab3b6434\") " pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.777758 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/747ba08a-df9e-422d-be4e-f2ababc30dea-installation-pull-secrets\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.778119 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25c871eb-063b-4177-b300-f3280f9f7c6a-secret-volume\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.778734 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ef094cac-bbf6-4a7b-9549-724b916baf0e-signing-key\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780372 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762099f7-c3ba-482a-9910-765d1abc7388-webhook-cert\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780432 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bcgnt\" (UniqueName: \"kubernetes.io/projected/5478df7b-0c00-4c78-9a8e-1bdba1477cde-kube-api-access-bcgnt\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780492 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dfbe9d8e-db99-404d-ba9d-d173ab3b6434-webhook-certs\") pod \"multus-admission-controller-69db94689b-42lxx\" (UID: \"dfbe9d8e-db99-404d-ba9d-d173ab3b6434\") " pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780514 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d65fd4-cf6c-4743-bb16-57d591424ffb-config-volume\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780630 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-srv-cert\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780667 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a94ef71-d05f-4af7-b557-e3c034866f73-config\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: 
\"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780703 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s5p7z\" (UniqueName: \"kubernetes.io/projected/69d65fd4-cf6c-4743-bb16-57d591424ffb-kube-api-access-s5p7z\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780759 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-kube-api-access-5cqfg\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780789 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zv5hn\" (UniqueName: \"kubernetes.io/projected/144852dc-946d-4a33-8453-c3d5bb49127d-kube-api-access-zv5hn\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780819 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/289e41c8-1dae-4739-a9a5-41f112254197-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780891 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-plugins-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780920 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/289e41c8-1dae-4739-a9a5-41f112254197-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780945 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nhdtn\" (UniqueName: \"kubernetes.io/projected/8d81cb10-abbd-4c04-9632-446be1e89c2b-kube-api-access-nhdtn\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780970 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69d65fd4-cf6c-4743-bb16-57d591424ffb-tmp-dir\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.780998 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pctdn\" (UniqueName: \"kubernetes.io/projected/762099f7-c3ba-482a-9910-765d1abc7388-kube-api-access-pctdn\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.781074 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a94ef71-d05f-4af7-b557-e3c034866f73-serving-cert\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.781102 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-metrics-certs\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.781153 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/289e41c8-1dae-4739-a9a5-41f112254197-images\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.781179 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8d81cb10-abbd-4c04-9632-446be1e89c2b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.781204 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f648600b-b3cf-4360-97e9-91a7b33ca283-tmpfs\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.784606 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/69d65fd4-cf6c-4743-bb16-57d591424ffb-config-volume\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.785309 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/762099f7-c3ba-482a-9910-765d1abc7388-webhook-cert\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.786167 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/289e41c8-1dae-4739-a9a5-41f112254197-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.786952 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx2kf\" (UniqueName: \"kubernetes.io/projected/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-kube-api-access-nx2kf\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.787495 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/69d65fd4-cf6c-4743-bb16-57d591424ffb-tmp-dir\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.789597 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a94ef71-d05f-4af7-b557-e3c034866f73-config\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.795051 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/762099f7-c3ba-482a-9910-765d1abc7388-tmpfs\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.795532 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8d81cb10-abbd-4c04-9632-446be1e89c2b-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.804471 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f648600b-b3cf-4360-97e9-91a7b33ca283-tmpfs\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.807675 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.816417 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-stats-auth\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.817107 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dfbe9d8e-db99-404d-ba9d-d173ab3b6434-webhook-certs\") pod \"multus-admission-controller-69db94689b-42lxx\" (UID: \"dfbe9d8e-db99-404d-ba9d-d173ab3b6434\") " pod="openshift-multus/multus-admission-controller-69db94689b-42lxx"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.817222 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f648600b-b3cf-4360-97e9-91a7b33ca283-profile-collector-cert\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.817741 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/747ba08a-df9e-422d-be4e-f2ababc30dea-installation-pull-secrets\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.817748 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/762099f7-c3ba-482a-9910-765d1abc7388-apiservice-cert\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.818189 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ef094cac-bbf6-4a7b-9549-724b916baf0e-signing-key\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.818354 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5478df7b-0c00-4c78-9a8e-1bdba1477cde-metrics-certs\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.818357 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/e63eb2b4-7d50-4fa4-b866-2a07239fda8e-srv-cert\") pod \"catalog-operator-75ff9f647d-rswb4\" (UID: \"e63eb2b4-7d50-4fa4-b866-2a07239fda8e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.818528 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.819102 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/289e41c8-1dae-4739-a9a5-41f112254197-images\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.825885 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/289e41c8-1dae-4739-a9a5-41f112254197-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.825966 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cd5244de-0460-4f31-914d-85541d3c975f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-7qhtw\" (UID: \"cd5244de-0460-4f31-914d-85541d3c975f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.826015 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjbm9\" (UniqueName: \"kubernetes.io/projected/98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d-kube-api-access-hjbm9\") pod \"migrator-866fcbc849-tdk6q\" (UID: \"98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.826783 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/69d65fd4-cf6c-4743-bb16-57d591424ffb-metrics-tls\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.827635 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a94ef71-d05f-4af7-b557-e3c034866f73-serving-cert\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.831032 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjkk\" (UniqueName: \"kubernetes.io/projected/cd5244de-0460-4f31-914d-85541d3c975f-kube-api-access-cjjkk\") pod \"control-plane-machine-set-operator-75ffdb6fcd-7qhtw\" (UID: \"cd5244de-0460-4f31-914d-85541d3c975f\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.832056 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-tls\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.833452 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25c871eb-063b-4177-b300-f3280f9f7c6a-secret-volume\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.840206 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn42g\" (UniqueName: \"kubernetes.io/projected/c0423bba-777b-4bd6-bef4-f126cc68f884-kube-api-access-bn42g\") pod \"package-server-manager-77f986bd66-47pxl\" (UID: \"c0423bba-777b-4bd6-bef4-f126cc68f884\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.854032 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzs6l\" (UniqueName: \"kubernetes.io/projected/289e41c8-1dae-4739-a9a5-41f112254197-kube-api-access-hzs6l\") pod \"machine-config-operator-67c9d58cbb-rnn26\" (UID: \"289e41c8-1dae-4739-a9a5-41f112254197\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.881118 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zc6vm\" (UniqueName: \"kubernetes.io/projected/f648600b-b3cf-4360-97e9-91a7b33ca283-kube-api-access-zc6vm\") pod \"olm-operator-5cdf44d969-9kq9m\" (UID: \"f648600b-b3cf-4360-97e9-91a7b33ca283\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.883821 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:10:59 crc kubenswrapper[5114]: E0216 00:10:59.884151 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.384072064 +0000 UTC m=+136.765348902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.884221 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-registration-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.884603 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-plugins-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.884782 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-htwk5\" (UniqueName: \"kubernetes.io/projected/e690b054-0cba-4297-8a2b-c926b456a057-kube-api-access-htwk5\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.885726 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-registration-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.887916 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-socket-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.887972 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-mountpoint-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.887995 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e690b054-0cba-4297-8a2b-c926b456a057-certs\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888055 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-csi-data-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888083 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-48fdn\" (UniqueName: \"kubernetes.io/projected/bde0f080-6423-454b-b0b5-30b9ee95e15e-kube-api-access-48fdn\") pod \"ingress-canary-hkwvd\" (UID: \"bde0f080-6423-454b-b0b5-30b9ee95e15e\") " pod="openshift-ingress-canary/ingress-canary-hkwvd"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888117 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bde0f080-6423-454b-b0b5-30b9ee95e15e-cert\") pod \"ingress-canary-hkwvd\" (UID: \"bde0f080-6423-454b-b0b5-30b9ee95e15e\") " pod="openshift-ingress-canary/ingress-canary-hkwvd"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888142 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ln9zg\" (UniqueName: \"kubernetes.io/projected/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-kube-api-access-ln9zg\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888181 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-5n27w"]
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888223 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj"]
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888239 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-5wc8p"]
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888196 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e690b054-0cba-4297-8a2b-c926b456a057-node-bootstrap-token\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888324 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888614 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-csi-data-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888835 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-plugins-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888853 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-mountpoint-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.888946 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-socket-dir\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:10:59 crc kubenswrapper[5114]: E0216 00:10:59.889196 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.389177882 +0000 UTC m=+136.770454700 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.894519 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bde0f080-6423-454b-b0b5-30b9ee95e15e-cert\") pod \"ingress-canary-hkwvd\" (UID: \"bde0f080-6423-454b-b0b5-30b9ee95e15e\") " pod="openshift-ingress-canary/ingress-canary-hkwvd"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.894951 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/e690b054-0cba-4297-8a2b-c926b456a057-node-bootstrap-token\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.895087 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/e690b054-0cba-4297-8a2b-c926b456a057-certs\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.902111 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zthpx\" (UniqueName: \"kubernetes.io/projected/ef094cac-bbf6-4a7b-9549-724b916baf0e-kube-api-access-zthpx\") pod \"service-ca-74545575db-nrsjt\" (UID: \"ef094cac-bbf6-4a7b-9549-724b916baf0e\") " pod="openshift-service-ca/service-ca-74545575db-nrsjt"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.914365 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvvwb\" (UniqueName: \"kubernetes.io/projected/25c871eb-063b-4177-b300-f3280f9f7c6a-kube-api-access-nvvwb\") pod \"collect-profiles-29520000-fprp5\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.940738 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7fnh\" (UniqueName: \"kubernetes.io/projected/7a94ef71-d05f-4af7-b557-e3c034866f73-kube-api-access-c7fnh\") pod \"service-ca-operator-5b9c976747-hsddz\" (UID: \"7a94ef71-d05f-4af7-b557-e3c034866f73\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.956969 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-bound-sa-token\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.992114 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:10:59 crc kubenswrapper[5114]: E0216 00:10:59.992621 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.492603629 +0000 UTC m=+136.873880447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:10:59 crc kubenswrapper[5114]: I0216 00:10:59.993129 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpl5g\" (UniqueName: \"kubernetes.io/projected/dfbe9d8e-db99-404d-ba9d-d173ab3b6434-kube-api-access-kpl5g\") pod \"multus-admission-controller-69db94689b-42lxx\" (UID: \"dfbe9d8e-db99-404d-ba9d-d173ab3b6434\") " pod="openshift-multus/multus-admission-controller-69db94689b-42lxx"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.003178 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcgnt\" (UniqueName: \"kubernetes.io/projected/5478df7b-0c00-4c78-9a8e-1bdba1477cde-kube-api-access-bcgnt\") pod \"router-default-68cf44c8b8-vdzjf\" (UID: \"5478df7b-0c00-4c78-9a8e-1bdba1477cde\") " pod="openshift-ingress/router-default-68cf44c8b8-vdzjf"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.028772 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pctdn\" (UniqueName: \"kubernetes.io/projected/762099f7-c3ba-482a-9910-765d1abc7388-kube-api-access-pctdn\") pod \"packageserver-7d4fc7d867-nvr4r\" (UID: \"762099f7-c3ba-482a-9910-765d1abc7388\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.036565 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-kube-api-access-5cqfg\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.038967 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.039178 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.048797 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.057280 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-sl2nf"]
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.064648 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.065748 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhdtn\" (UniqueName: \"kubernetes.io/projected/8d81cb10-abbd-4c04-9632-446be1e89c2b-kube-api-access-nhdtn\") pod \"cni-sysctl-allowlist-ds-n7nf8\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.075411 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.078927 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.090049 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.093561 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.093748 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl"
Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.094360 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.594326027 +0000 UTC m=+136.975602845 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.101889 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.106686 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw"]
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.120059 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zv5hn\" (UniqueName: \"kubernetes.io/projected/144852dc-946d-4a33-8453-c3d5bb49127d-kube-api-access-zv5hn\") pod \"marketplace-operator-547dbd544d-crpbt\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.120409 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-nrsjt"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.123114 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5p7z\" (UniqueName: \"kubernetes.io/projected/69d65fd4-cf6c-4743-bb16-57d591424ffb-kube-api-access-s5p7z\") pod \"dns-default-h8c98\" (UID: \"69d65fd4-cf6c-4743-bb16-57d591424ffb\") " pod="openshift-dns/dns-default-h8c98"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.130934 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.134656 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-htwk5\" (UniqueName: \"kubernetes.io/projected/e690b054-0cba-4297-8a2b-c926b456a057-kube-api-access-htwk5\") pod \"machine-config-server-5w595\" (UID: \"e690b054-0cba-4297-8a2b-c926b456a057\") " pod="openshift-machine-config-operator/machine-config-server-5w595"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.149495 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-42lxx"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.165803 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln9zg\" (UniqueName: \"kubernetes.io/projected/98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9-kube-api-access-ln9zg\") pod \"csi-hostpathplugin-zffmj\" (UID: \"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9\") " pod="hostpath-provisioner/csi-hostpathplugin-zffmj"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.178960 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-h8c98"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.179432 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.192568 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-48fdn\" (UniqueName: \"kubernetes.io/projected/bde0f080-6423-454b-b0b5-30b9ee95e15e-kube-api-access-48fdn\") pod \"ingress-canary-hkwvd\" (UID: \"bde0f080-6423-454b-b0b5-30b9ee95e15e\") " pod="openshift-ingress-canary/ingress-canary-hkwvd"
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.204521 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.205330 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.705311543 +0000 UTC m=+137.086588371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.219417 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-5w595" Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.225619 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.310814 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.311230 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.811215111 +0000 UTC m=+137.192492029 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: W0216 00:11:00.321435 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89ae73bd_df87_4388_876a_2ed38972eb2b.slice/crio-88a878f266d5ef6cd118e908b9f55ef8aa89ac998d9c162a4a9a0126d5eec0d1 WatchSource:0}: Error finding container 88a878f266d5ef6cd118e908b9f55ef8aa89ac998d9c162a4a9a0126d5eec0d1: Status 404 returned error can't find the container with id 88a878f266d5ef6cd118e908b9f55ef8aa89ac998d9c162a4a9a0126d5eec0d1 Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.324565 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.324629 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.348830 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.351954 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.375943 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.413236 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.413642 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.413819 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.913798214 +0000 UTC m=+137.295075032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.414041 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.414328 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:00.914321369 +0000 UTC m=+137.295598187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.444991 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.472210 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq"] Feb 16 00:11:00 crc kubenswrapper[5114]: W0216 00:11:00.478511 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05370d66_0f2a_4733_9077_d916206c2b6e.slice/crio-714f049d75dd345e7daff3fae920bffcc75a21e8fb5ed181d43e59b8aa182084 WatchSource:0}: Error finding container 714f049d75dd345e7daff3fae920bffcc75a21e8fb5ed181d43e59b8aa182084: Status 404 returned error can't find the container with id 714f049d75dd345e7daff3fae920bffcc75a21e8fb5ed181d43e59b8aa182084 Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.480469 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.487110 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-hkwvd" Feb 16 00:11:00 crc kubenswrapper[5114]: W0216 00:11:00.513700 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e690d2a_4d5a_4d38_bf04_fe6951258527.slice/crio-5502a57cd0430aa69b3f2091f30c22baa221b6341ba1d1f91cdcd03eeae51929 WatchSource:0}: Error finding container 5502a57cd0430aa69b3f2091f30c22baa221b6341ba1d1f91cdcd03eeae51929: Status 404 returned error can't find the container with id 5502a57cd0430aa69b3f2091f30c22baa221b6341ba1d1f91cdcd03eeae51929 Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.514362 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.514759 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.014746209 +0000 UTC m=+137.396023027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.527369 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-d8d6z"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.534020 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-l8qvm"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.600647 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29520000-tmdgt"] Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.642117 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.142086819 +0000 UTC m=+137.523363637 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.651126 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.713206 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.727143 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.727314 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.749053 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-x9wkk" podStartSLOduration=113.749025188 podStartE2EDuration="1m53.749025188s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:00.719206734 +0000 UTC m=+137.100483552" 
watchObservedRunningTime="2026-02-16 00:11:00.749025188 +0000 UTC m=+137.130302006" Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.751428 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.752435 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.752749 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.252732616 +0000 UTC m=+137.634009434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.759347 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" podStartSLOduration=113.759322936 podStartE2EDuration="1m53.759322936s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:00.758130572 +0000 UTC m=+137.139407390" watchObservedRunningTime="2026-02-16 00:11:00.759322936 +0000 UTC m=+137.140599754" Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.767716 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" event={"ID":"e5d36493-e813-44ad-9206-003a1ed39135","Type":"ContainerStarted","Data":"66eaeb3555e6b2c095b64b70c2d24e5ddc171ff1e82af6a82f6ce6fcdc63276a"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.780919 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" event={"ID":"05370d66-0f2a-4733-9077-d916206c2b6e","Type":"ContainerStarted","Data":"714f049d75dd345e7daff3fae920bffcc75a21e8fb5ed181d43e59b8aa182084"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.781642 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl"] Feb 16 00:11:00 crc 
kubenswrapper[5114]: W0216 00:11:00.801097 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode63eb2b4_7d50_4fa4_b866_2a07239fda8e.slice/crio-e5d393af317deaee72773890f9a1766681be0cc6e919530f784a52abe5e58ea4 WatchSource:0}: Error finding container e5d393af317deaee72773890f9a1766681be0cc6e919530f784a52abe5e58ea4: Status 404 returned error can't find the container with id e5d393af317deaee72773890f9a1766681be0cc6e919530f784a52abe5e58ea4 Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.801846 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" event={"ID":"5478df7b-0c00-4c78-9a8e-1bdba1477cde","Type":"ContainerStarted","Data":"98e3a610d20292502e5dcdf978f5898828070a0364ff3e267a12d96f55ecb58d"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.811647 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-l8qvm" event={"ID":"d7dc7990-5b90-402e-b2bc-53d94e232af4","Type":"ContainerStarted","Data":"81142b5f2692d366e309ebf864f683aa136364563b538b34c4b927880ba9e21d"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.813429 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" event={"ID":"5f84bfa8-7177-4705-8591-f4e33059d290","Type":"ContainerStarted","Data":"554b1e96d533304aa32ac2c2ed877aeaeeb3e16e066c4700127c366874f78dae"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.813968 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" event={"ID":"8e690d2a-4d5a-4d38-bf04-fe6951258527","Type":"ContainerStarted","Data":"5502a57cd0430aa69b3f2091f30c22baa221b6341ba1d1f91cdcd03eeae51929"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.824573 5114 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" event={"ID":"5973ce7e-fa3d-45a5-9700-34e045a81edc","Type":"ContainerStarted","Data":"7a3a6cc3c810f8e695f76fc3e4d0fdec1e927c1cb140477c724d91d7bb3b1c24"} Feb 16 00:11:00 crc kubenswrapper[5114]: W0216 00:11:00.824777 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc476e668_a97b_4ce6_9eb1_d278b804cf1d.slice/crio-ec12a4fbfd1c493d045121dcd8c98d5dde5da489a76bf0c8048217d1e8874789 WatchSource:0}: Error finding container ec12a4fbfd1c493d045121dcd8c98d5dde5da489a76bf0c8048217d1e8874789: Status 404 returned error can't find the container with id ec12a4fbfd1c493d045121dcd8c98d5dde5da489a76bf0c8048217d1e8874789 Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.832261 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" event={"ID":"44a23ff1-70d4-4f26-b405-486ec014bf36","Type":"ContainerStarted","Data":"0caf7dde6d85b88f66606c938718b5c7cf4cd699d251186e79b5189356ed70f9"} Feb 16 00:11:00 crc kubenswrapper[5114]: W0216 00:11:00.835585 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod289e41c8_1dae_4739_a9a5_41f112254197.slice/crio-7cd858d037379ca24052f12ee7e2553310fd586410489279b60578915acc052b WatchSource:0}: Error finding container 7cd858d037379ca24052f12ee7e2553310fd586410489279b60578915acc052b: Status 404 returned error can't find the container with id 7cd858d037379ca24052f12ee7e2553310fd586410489279b60578915acc052b Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.835673 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" 
event={"ID":"89ae73bd-df87-4388-876a-2ed38972eb2b","Type":"ContainerStarted","Data":"88a878f266d5ef6cd118e908b9f55ef8aa89ac998d9c162a4a9a0126d5eec0d1"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.845305 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" event={"ID":"a1459fc5-08d9-4442-ad34-0b310742cad4","Type":"ContainerStarted","Data":"4a8e37f42caa628b810596a3f9f27c4d974528cd2db01c1a410c44923c1655a9"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.845352 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" event={"ID":"a1459fc5-08d9-4442-ad34-0b310742cad4","Type":"ContainerStarted","Data":"64856dd1ced9e2aeef2a2652929ea16ba953ee3984e55527bad1627b3ef9af9b"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.854655 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.855002 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.354988059 +0000 UTC m=+137.736264877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.862416 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q"] Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.869983 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" event={"ID":"52fcb5f2-d1d1-45d2-ba98-8619492efe7f","Type":"ContainerStarted","Data":"93c1ce7035508275fcbd54050eb38124fd55feb6bc1934860ae04c38f016197d"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.912378 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" event={"ID":"6e8f4d24-5c9f-4a63-8909-f38807a68a86","Type":"ContainerStarted","Data":"85fc0a6ac6bb95f9c85de22fd9a33be304c0e47ba2cf38d4b361ae851c3b045d"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.922783 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" event={"ID":"3067f2a2-db60-4372-88da-6d376071d340","Type":"ContainerStarted","Data":"d1bd40a81a665381a7f032c8dcd7c60e77c2c5c26a2401f795c38ad390132612"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.922827 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" 
event={"ID":"3067f2a2-db60-4372-88da-6d376071d340","Type":"ContainerStarted","Data":"7cd93fab1beef699c884248f8fcef460ef49698d91ed60f2de5c311a4e337f25"} Feb 16 00:11:00 crc kubenswrapper[5114]: W0216 00:11:00.938424 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d81cb10_abbd_4c04_9632_446be1e89c2b.slice/crio-815a9e1b5d1248674a5d37c6683c25ee9f6552d837d8a4abfb2c9d2098cb7f27 WatchSource:0}: Error finding container 815a9e1b5d1248674a5d37c6683c25ee9f6552d837d8a4abfb2c9d2098cb7f27: Status 404 returned error can't find the container with id 815a9e1b5d1248674a5d37c6683c25ee9f6552d837d8a4abfb2c9d2098cb7f27 Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.956976 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:00 crc kubenswrapper[5114]: E0216 00:11:00.959082 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.459061924 +0000 UTC m=+137.840338742 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.980705 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" event={"ID":"14059e76-0bc1-4982-ad4f-3aa9254b420b","Type":"ContainerStarted","Data":"3a129eae23fb20ad79be0676031415956dee07857682f2c29350e9744151f07d"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.986810 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" event={"ID":"32f15e1a-44ae-483f-8b19-d92afee5fdcc","Type":"ContainerStarted","Data":"0b56f60ac9bffaa41a546013416f7e7dd7f9e4dedd91ee14e6bdd43542bf56ff"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.990008 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" event={"ID":"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e","Type":"ContainerStarted","Data":"6ea0b22657203a41981c8dc083a8b0dca0d2e0b8f0ed172dee22e132d13d866d"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.990159 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" event={"ID":"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e","Type":"ContainerStarted","Data":"e0a2df41db62af67ddb18b6dd3b84499526431171260e934b66aacc5b442c3f7"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.991260 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" 
event={"ID":"57cff053-a179-4f6a-a38f-ddee39ec6c0b","Type":"ContainerStarted","Data":"f7b9e6bfe8e1ae934b03c25934e9fd6f3c4c0fc37a8aafc27d4e0db6ec0e6341"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.995200 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" event={"ID":"4105502f-c677-4389-9d65-126fd4126663","Type":"ContainerStarted","Data":"5b37d5c33fa71836387a1ad07afa4071f6c75b32234f8105c00ad80531a68a70"} Feb 16 00:11:00 crc kubenswrapper[5114]: I0216 00:11:00.995976 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" event={"ID":"8916fc5f-e3fa-4e47-af78-923d1cd35984","Type":"ContainerStarted","Data":"ead680effc67a52c14636d24d6078da39849c6d770c283f9ea8790f88eb4421d"} Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.008202 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-x9wkk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.008317 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x9wkk" podUID="f47442a6-b454-45d5-8094-794e063f573d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.012931 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.016923 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 
00:11:01.058567 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.060729 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.56070725 +0000 UTC m=+137.941984128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.079892 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" podStartSLOduration=113.079875055 podStartE2EDuration="1m53.079875055s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:01.068565617 +0000 UTC m=+137.449842445" watchObservedRunningTime="2026-02-16 00:11:01.079875055 +0000 UTC m=+137.461151873" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.159500 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.159724 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.659703658 +0000 UTC m=+138.040980476 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.161275 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.171991 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.671963874 +0000 UTC m=+138.053240692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.240481 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-h8c98"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.242288 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.250620 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36092: no serving certificate available for the kubelet" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.269006 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.270006 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.769990244 +0000 UTC m=+138.151267062 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.350405 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36106: no serving certificate available for the kubelet" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.370973 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.371508 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.871481245 +0000 UTC m=+138.252758063 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.427589 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.447027 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.450957 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36112: no serving certificate available for the kubelet" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.458845 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-nrsjt"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.470790 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.472678 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.472886 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.972827582 +0000 UTC m=+138.354104400 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.473609 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.473975 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:01.973944004 +0000 UTC m=+138.355220822 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.513624 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-crpbt"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.554637 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36124: no serving certificate available for the kubelet" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.560853 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-hkwvd"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.574889 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.575300 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.07526558 +0000 UTC m=+138.456542398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.576595 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.577006 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.07698846 +0000 UTC m=+138.458265278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.584789 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-pfbq6" podStartSLOduration=115.584765396 podStartE2EDuration="1m55.584765396s" podCreationTimestamp="2026-02-16 00:09:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:01.583622112 +0000 UTC m=+137.964898930" watchObservedRunningTime="2026-02-16 00:11:01.584765396 +0000 UTC m=+137.966042214" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.605360 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.670440 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36126: no serving certificate available for the kubelet" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.675632 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-zffmj"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.677711 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " 
Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.678047 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.178024448 +0000 UTC m=+138.559301266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.706912 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-42lxx"] Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.776426 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36132: no serving certificate available for the kubelet" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.783404 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.784017 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 00:11:02.283992709 +0000 UTC m=+138.665269527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.884695 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.885669 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.385235632 +0000 UTC m=+138.766512450 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.918048 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" podStartSLOduration=113.918033733 podStartE2EDuration="1m53.918033733s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:01.917503737 +0000 UTC m=+138.298780555" watchObservedRunningTime="2026-02-16 00:11:01.918033733 +0000 UTC m=+138.299310551" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.969767 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36134: no serving certificate available for the kubelet" Feb 16 00:11:01 crc kubenswrapper[5114]: I0216 00:11:01.986279 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:01 crc kubenswrapper[5114]: E0216 00:11:01.986585 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 00:11:02.486573289 +0000 UTC m=+138.867850107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.011370 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" event={"ID":"05370d66-0f2a-4733-9077-d916206c2b6e","Type":"ContainerStarted","Data":"2acb46b7af7dd8ee2b19dd64da4aef8fe67d75b56958b82e91f9e641c1c432bf"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.012664 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hkwvd" event={"ID":"bde0f080-6423-454b-b0b5-30b9ee95e15e","Type":"ContainerStarted","Data":"175e1c06787b9fee229387bea9fb779822bcfe09916ea51325c4fcc0e65d0813"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.016454 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" event={"ID":"762099f7-c3ba-482a-9910-765d1abc7388","Type":"ContainerStarted","Data":"d5e4e927371f136c20b0fa9794a44d00266a573760f9f4f2a008262712ca0f26"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.031457 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-76xtj" podStartSLOduration=115.031313065 podStartE2EDuration="1m55.031313065s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:01.994808798 +0000 UTC m=+138.376085616" watchObservedRunningTime="2026-02-16 00:11:02.031313065 +0000 UTC m=+138.412589883" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.032027 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-krh67" podStartSLOduration=114.032020806 podStartE2EDuration="1m54.032020806s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.030549133 +0000 UTC m=+138.411825951" watchObservedRunningTime="2026-02-16 00:11:02.032020806 +0000 UTC m=+138.413297624" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.037517 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" event={"ID":"5478df7b-0c00-4c78-9a8e-1bdba1477cde","Type":"ContainerStarted","Data":"8b6caf907c3a9b3ad4a9252c05b38367c98c5426b93a48c0a66024f79edee9a4"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.063517 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-l8qvm" event={"ID":"d7dc7990-5b90-402e-b2bc-53d94e232af4","Type":"ContainerStarted","Data":"ccde35709da906f02dae19a57b58d372d953d1556170c78d81b555748a6f8780"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.093484 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.094658 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.59464025 +0000 UTC m=+138.975917068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.116052 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" event={"ID":"5973ce7e-fa3d-45a5-9700-34e045a81edc","Type":"ContainerStarted","Data":"3ff7ae14f4bd45245fb5c4cbb94b464872d0fbc47a92c27771431b0473e0bfe6"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.129977 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q" event={"ID":"98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d","Type":"ContainerStarted","Data":"9c4626a9c59f278082423ace00d4110fc03c32767266d34e579956fd0a9d7449"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.132939 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" event={"ID":"7a94ef71-d05f-4af7-b557-e3c034866f73","Type":"ContainerStarted","Data":"be2b5b8633544a53589f3b848b861cca0925cdf999835ffba1c2ef946658a730"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.139462 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-nrsjt" 
event={"ID":"ef094cac-bbf6-4a7b-9549-724b916baf0e","Type":"ContainerStarted","Data":"1617a6104bb2807401d53afdfd63fbfa90ac0b7d51b4a9fd784b5aaf43b4f587"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.151674 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h8c98" event={"ID":"69d65fd4-cf6c-4743-bb16-57d591424ffb","Type":"ContainerStarted","Data":"221d70af5d180be83dacf30be16168646c939d8b61b3d7f1967057871b3878e1"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.155389 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" podStartSLOduration=114.15537056 podStartE2EDuration="1m54.15537056s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.154211387 +0000 UTC m=+138.535488225" watchObservedRunningTime="2026-02-16 00:11:02.15537056 +0000 UTC m=+138.536647388" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.176944 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" event={"ID":"57cff053-a179-4f6a-a38f-ddee39ec6c0b","Type":"ContainerStarted","Data":"65b751865e8f76e43c859e13ef089cfb8f2cea8ae8e6eec1d49d15755b269aca"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.182003 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29520000-tmdgt" event={"ID":"bdc47cbe-a3d3-432a-b8bb-399a35be1822","Type":"ContainerStarted","Data":"60f2c0dea61e85edb3e8e336d4ec8987f3e5bb7b7a5f7650201e82dd556534a0"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.182049 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29520000-tmdgt" 
event={"ID":"bdc47cbe-a3d3-432a-b8bb-399a35be1822","Type":"ContainerStarted","Data":"582cafe72e294e516111c1c8151f070f1c66724e4b7952bc46ffa3937586a314"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.187825 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" event={"ID":"e5d36493-e813-44ad-9206-003a1ed39135","Type":"ContainerStarted","Data":"180c97ec3ead7678a27042106f9037b318992c9528212513e56eb41c17d69498"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.195468 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.196153 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5w595" event={"ID":"e690b054-0cba-4297-8a2b-c926b456a057","Type":"ContainerStarted","Data":"69d2a07540f178601c0ccf34e4a70273b08fed0229b3f4a706008ad381a92f7c"} Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.196921 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.696905334 +0000 UTC m=+139.078182152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.201863 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" event={"ID":"25c871eb-063b-4177-b300-f3280f9f7c6a","Type":"ContainerStarted","Data":"6911b7256e9edb22084eaeb0151130e5b62b6b489932d69234cb73c964dd31cf"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.214733 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" event={"ID":"e63eb2b4-7d50-4fa4-b866-2a07239fda8e","Type":"ContainerStarted","Data":"e5d393af317deaee72773890f9a1766681be0cc6e919530f784a52abe5e58ea4"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.225824 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" event={"ID":"f648600b-b3cf-4360-97e9-91a7b33ca283","Type":"ContainerStarted","Data":"eaa7963ee99f431b54665ef89cc9aad7d43484906b562734c241a70d7c4eec57"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.225866 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" event={"ID":"f648600b-b3cf-4360-97e9-91a7b33ca283","Type":"ContainerStarted","Data":"ac494dcf8fa364bb3e78f816600b2c8e1395a751fb6ad33b26aff9ef4777ae4d"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.226651 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.238756 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podStartSLOduration=114.238741946 podStartE2EDuration="1m54.238741946s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.237730817 +0000 UTC m=+138.619007645" watchObservedRunningTime="2026-02-16 00:11:02.238741946 +0000 UTC m=+138.620018764" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.248752 5114 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-9kq9m container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.248817 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" podUID="f648600b-b3cf-4360-97e9-91a7b33ca283" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.249440 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" event={"ID":"44a23ff1-70d4-4f26-b405-486ec014bf36","Type":"ContainerStarted","Data":"3d2146749faff1925c75acc85b4e9b67ffce95af6edb3395508a07f74aa8514d"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.275667 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" 
event={"ID":"89ae73bd-df87-4388-876a-2ed38972eb2b","Type":"ContainerStarted","Data":"8234ed0a774fea7f2c2c98048b86044244c08173788a990921091d28b26a05d4"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.275716 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.277549 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" podStartSLOduration=114.27753106 podStartE2EDuration="1m54.27753106s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.276190581 +0000 UTC m=+138.657467399" watchObservedRunningTime="2026-02-16 00:11:02.27753106 +0000 UTC m=+138.658807878" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.289345 5114 patch_prober.go:28] interesting pod/console-operator-67c89758df-sl2nf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.289399 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" podUID="89ae73bd-df87-4388-876a-2ed38972eb2b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.297570 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.299082 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.799050613 +0000 UTC m=+139.180327431 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.300374 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.305055 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.805040967 +0000 UTC m=+139.186317785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.310626 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" event={"ID":"dfbe9d8e-db99-404d-ba9d-d173ab3b6434","Type":"ContainerStarted","Data":"d8d8baf0a43d60a2f69ccc2010dc937815cbca507af105256e6b1ab7d6960b36"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.311093 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-shgmb" podStartSLOduration=114.311079472 podStartE2EDuration="1m54.311079472s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.309456955 +0000 UTC m=+138.690733773" watchObservedRunningTime="2026-02-16 00:11:02.311079472 +0000 UTC m=+138.692356290" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.318538 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" event={"ID":"8d81cb10-abbd-4c04-9632-446be1e89c2b","Type":"ContainerStarted","Data":"815a9e1b5d1248674a5d37c6683c25ee9f6552d837d8a4abfb2c9d2098cb7f27"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.327388 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36136: no serving certificate available for the kubelet" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.346330 
5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" event={"ID":"cd5244de-0460-4f31-914d-85541d3c975f","Type":"ContainerStarted","Data":"51221bd61f2c654142b34326180c3fdb3a7ec76e7cc922d3dd741fdc864b938b"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.383842 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" event={"ID":"289e41c8-1dae-4739-a9a5-41f112254197","Type":"ContainerStarted","Data":"7cd858d037379ca24052f12ee7e2553310fd586410489279b60578915acc052b"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.394424 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-xdwmj" podStartSLOduration=114.394408967 podStartE2EDuration="1m54.394408967s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.352589465 +0000 UTC m=+138.733866293" watchObservedRunningTime="2026-02-16 00:11:02.394408967 +0000 UTC m=+138.775685785" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.397153 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-l8qvm" podStartSLOduration=115.397144716 podStartE2EDuration="1m55.397144716s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.39280968 +0000 UTC m=+138.774086498" watchObservedRunningTime="2026-02-16 00:11:02.397144716 +0000 UTC m=+138.778421534" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.411943 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.412193 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.912178292 +0000 UTC m=+139.293455100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.412287 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.412522 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:02.912515271 +0000 UTC m=+139.293792089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.439568 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" event={"ID":"c0423bba-777b-4bd6-bef4-f126cc68f884","Type":"ContainerStarted","Data":"21f2d4f8ada68dfdaefabd34acc09c5d89ec1ca8239ee7b7e9e4725a10310852"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.474901 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" event={"ID":"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9","Type":"ContainerStarted","Data":"c93ebbef69c8acce83b503286101ebad3c44d1dba939c5d90e22cade3366ccfe"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.488021 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" podStartSLOduration=115.487998929 podStartE2EDuration="1m55.487998929s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.477880936 +0000 UTC m=+138.859157754" watchObservedRunningTime="2026-02-16 00:11:02.487998929 +0000 UTC m=+138.869275747" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.514231 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.515172 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.015123955 +0000 UTC m=+139.396400763 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.515408 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.516517 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.016500605 +0000 UTC m=+139.397777423 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.518491 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29520000-tmdgt" podStartSLOduration=115.518479572 podStartE2EDuration="1m55.518479572s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.512175339 +0000 UTC m=+138.893452157" watchObservedRunningTime="2026-02-16 00:11:02.518479572 +0000 UTC m=+138.899756390" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.519712 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" event={"ID":"4f2c237a-0f7f-4dd6-a35c-6533fbc3522e","Type":"ContainerStarted","Data":"c5309c0cd976a7ff8ceb89c015d5fd3bce0a736c03687929c61e5e481583fed0"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.542031 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" event={"ID":"144852dc-946d-4a33-8453-c3d5bb49127d","Type":"ContainerStarted","Data":"b0944797a88d66ed8cc4e6135707089e458e18981ae551bcf583fb87e6aabb3c"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.579139 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" 
event={"ID":"c476e668-a97b-4ce6-9eb1-d278b804cf1d","Type":"ContainerStarted","Data":"c3a9d314420a41aadffebb8dcf01c941b75bb9116ab04020d35a028d899a29af"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.579173 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" event={"ID":"c476e668-a97b-4ce6-9eb1-d278b804cf1d","Type":"ContainerStarted","Data":"ec12a4fbfd1c493d045121dcd8c98d5dde5da489a76bf0c8048217d1e8874789"} Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.579944 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-x9wkk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.579978 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x9wkk" podUID="f47442a6-b454-45d5-8094-794e063f573d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.598457 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-5n27w" podStartSLOduration=114.598440509 podStartE2EDuration="1m54.598440509s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.595628927 +0000 UTC m=+138.976905745" watchObservedRunningTime="2026-02-16 00:11:02.598440509 +0000 UTC m=+138.979717327" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.599707 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" podStartSLOduration=114.599700245 podStartE2EDuration="1m54.599700245s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.55636989 +0000 UTC m=+138.937646708" watchObservedRunningTime="2026-02-16 00:11:02.599700245 +0000 UTC m=+138.980977063" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.621176 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.621316 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.121291581 +0000 UTC m=+139.502568399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.621665 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.622964 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.122954009 +0000 UTC m=+139.504230917 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.633361 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" podStartSLOduration=115.63334441 podStartE2EDuration="1m55.63334441s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.631878528 +0000 UTC m=+139.013155346" watchObservedRunningTime="2026-02-16 00:11:02.63334441 +0000 UTC m=+139.014621228" Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.722759 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.724503 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.224482331 +0000 UTC m=+139.605759139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.827127 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.831020 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.331002888 +0000 UTC m=+139.712279706 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:02 crc kubenswrapper[5114]: I0216 00:11:02.935762 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:02 crc kubenswrapper[5114]: E0216 00:11:02.936332 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.436306639 +0000 UTC m=+139.817583447 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.024719 5114 ???:1] "http: TLS handshake error from 192.168.126.11:36148: no serving certificate available for the kubelet" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.037649 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.038060 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.538045718 +0000 UTC m=+139.919322536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.039714 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.041369 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.041635 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.139838 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.139993 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.639975321 +0000 UTC m=+140.021252139 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.140155 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.140549 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.640541228 +0000 UTC m=+140.021818046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.242420 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.242935 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.742902234 +0000 UTC m=+140.124179052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.343933 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.344308 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.844295262 +0000 UTC m=+140.225572080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.444928 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.445417 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:03.945402131 +0000 UTC m=+140.326678949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.547378 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.547970 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.047948783 +0000 UTC m=+140.429225641 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.648897 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.649761 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.149733262 +0000 UTC m=+140.531010120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.662973 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.663060 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.694333 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.698528 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" event={"ID":"8d81cb10-abbd-4c04-9632-446be1e89c2b","Type":"ContainerStarted","Data":"fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.702223 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.737979 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" event={"ID":"cd5244de-0460-4f31-914d-85541d3c975f","Type":"ContainerStarted","Data":"3a94d03c528d81f9592c3a49dec5a24ad82eb7659cc89fa7a9cbd80c27383eb8"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.744726 5114 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-m5g99" podStartSLOduration=116.744708144 podStartE2EDuration="1m56.744708144s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:02.675762659 +0000 UTC m=+139.057039487" watchObservedRunningTime="2026-02-16 00:11:03.744708144 +0000 UTC m=+140.125984962" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.766782 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.770746 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.771444 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.271427579 +0000 UTC m=+140.652704397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.778962 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" event={"ID":"289e41c8-1dae-4739-a9a5-41f112254197","Type":"ContainerStarted","Data":"1beb346d17f80c96d91d7b1ec61bbae00744b2d02e732a09d8fed8b28de4a671"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.797312 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" podStartSLOduration=7.797270707 podStartE2EDuration="7.797270707s" podCreationTimestamp="2026-02-16 00:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:03.791235223 +0000 UTC m=+140.172512041" watchObservedRunningTime="2026-02-16 00:11:03.797270707 +0000 UTC m=+140.178547525" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.798140 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" event={"ID":"52fcb5f2-d1d1-45d2-ba98-8619492efe7f","Type":"ContainerStarted","Data":"58e58a205b10f32c58dcb711c16bcfe3c0043601f837f5075ade05e49ba0600a"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.799914 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" 
event={"ID":"c0423bba-777b-4bd6-bef4-f126cc68f884","Type":"ContainerStarted","Data":"09fe5e589effa331d2ebb2c01dea2259e73f549e4e047d0026ad215d8acb1a5a"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.876187 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.881219 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-t657p" event={"ID":"6e8f4d24-5c9f-4a63-8909-f38807a68a86","Type":"ContainerStarted","Data":"c87735a51ed752037ad8feec985968a82f2846de0fedefb7358c2034136048ef"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.881307 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" event={"ID":"8916fc5f-e3fa-4e47-af78-923d1cd35984","Type":"ContainerStarted","Data":"806dd4986f8fcb73a323e6da0689ff9b1edc169bd45816d2e9de37cb7b21268a"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.881332 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" event={"ID":"5f84bfa8-7177-4705-8591-f4e33059d290","Type":"ContainerStarted","Data":"b53c2d6d00477dadd171eb19f58519372034c77071ae315c1923a932adceae65"} Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.881566 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.381531399 +0000 UTC m=+140.762808227 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.897783 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-btwkm" event={"ID":"8e690d2a-4d5a-4d38-bf04-fe6951258527","Type":"ContainerStarted","Data":"d5c3c4e1a62c1b5f53154f3ada80ba0a9f221c03dfdd314af00a84d997c0f13e"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.904048 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-cpqbw" podStartSLOduration=115.904029661 podStartE2EDuration="1m55.904029661s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:03.903772874 +0000 UTC m=+140.285049692" watchObservedRunningTime="2026-02-16 00:11:03.904029661 +0000 UTC m=+140.285306479" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.907499 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q" event={"ID":"98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d","Type":"ContainerStarted","Data":"9fe533d10f786fbc88e8f536861d82d6be9d15baa154c405c1308536eab06fd6"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.952515 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-nrsjt" 
event={"ID":"ef094cac-bbf6-4a7b-9549-724b916baf0e","Type":"ContainerStarted","Data":"2294c38c2cf86a43719dfdab0f8f89d58c0a283645bdda7d360776c3d7b6f8af"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.964695 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-9nrhq" podStartSLOduration=115.964678859 podStartE2EDuration="1m55.964678859s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:03.963806763 +0000 UTC m=+140.345083581" watchObservedRunningTime="2026-02-16 00:11:03.964678859 +0000 UTC m=+140.345955677" Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.990524 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" event={"ID":"32f15e1a-44ae-483f-8b19-d92afee5fdcc","Type":"ContainerStarted","Data":"a678ab5c130ce312dc84a9f9a7ac9c884e1efa1fff43f06f007aeb056a68acd3"} Feb 16 00:11:03 crc kubenswrapper[5114]: I0216 00:11:03.994891 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:03 crc kubenswrapper[5114]: E0216 00:11:03.999481 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.499461546 +0000 UTC m=+140.880738364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.028531 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" event={"ID":"4105502f-c677-4389-9d65-126fd4126663","Type":"ContainerStarted","Data":"02e7a297f77b4e8871628fd202536d78c5777909613332510f01d363bb0a7f71"} Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.041010 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.041079 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.051256 5114 generic.go:358] "Generic (PLEG): container finished" podID="e5d36493-e813-44ad-9206-003a1ed39135" containerID="180c97ec3ead7678a27042106f9037b318992c9528212513e56eb41c17d69498" exitCode=0 Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.051386 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" 
event={"ID":"e5d36493-e813-44ad-9206-003a1ed39135","Type":"ContainerDied","Data":"180c97ec3ead7678a27042106f9037b318992c9528212513e56eb41c17d69498"} Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.069051 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-5w595" event={"ID":"e690b054-0cba-4297-8a2b-c926b456a057","Type":"ContainerStarted","Data":"d8316bd695e6664ac1a543da4fe74ee43a787963955312d8e6da480281c8e2cf"} Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.075633 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" event={"ID":"e63eb2b4-7d50-4fa4-b866-2a07239fda8e","Type":"ContainerStarted","Data":"158185241adc16ec3c98fd2add48f514a48fef1a650ea96acf9bde365ce4dacd"} Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.097112 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.097406 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.597386744 +0000 UTC m=+140.978663572 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.110393 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-n7nf8"] Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.198657 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.199191 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.699167743 +0000 UTC m=+141.080444731 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.299786 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.300000 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.799965494 +0000 UTC m=+141.181242312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.300462 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.300871 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.80084531 +0000 UTC m=+141.182122208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.390765 5114 ???:1] "http: TLS handshake error from 192.168.126.11:48268: no serving certificate available for the kubelet" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.396032 5114 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-9kq9m container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.396121 5114 patch_prober.go:28] interesting pod/console-operator-67c89758df-sl2nf container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.396166 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" podUID="f648600b-b3cf-4360-97e9-91a7b33ca283" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.396191 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" podUID="89ae73bd-df87-4388-876a-2ed38972eb2b" 
containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.401228 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.402704 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.902677071 +0000 UTC m=+141.283953889 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.404741 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.407985 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:04.907958424 +0000 UTC m=+141.289235242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.422777 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-8x4kb" podStartSLOduration=116.422760222 podStartE2EDuration="1m56.422760222s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:04.420825016 +0000 UTC m=+140.802101854" watchObservedRunningTime="2026-02-16 00:11:04.422760222 +0000 UTC m=+140.804037040" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.423189 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-7n2z7" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.423317 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-nrsjt" podStartSLOduration=116.423311558 podStartE2EDuration="1m56.423311558s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:04.382276679 +0000 UTC 
m=+140.763553507" watchObservedRunningTime="2026-02-16 00:11:04.423311558 +0000 UTC m=+140.804588376" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.462156 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" podStartSLOduration=117.462138404 podStartE2EDuration="1m57.462138404s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:04.461374591 +0000 UTC m=+140.842651429" watchObservedRunningTime="2026-02-16 00:11:04.462138404 +0000 UTC m=+140.843415222" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.507289 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.507455 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.007426746 +0000 UTC m=+141.388703794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.508199 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.511190 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.011173185 +0000 UTC m=+141.392450003 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.522527 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" podStartSLOduration=116.522506973 podStartE2EDuration="1m56.522506973s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:04.494906933 +0000 UTC m=+140.876183761" watchObservedRunningTime="2026-02-16 00:11:04.522506973 +0000 UTC m=+140.903783791" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.551993 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-5w595" podStartSLOduration=7.551978687 podStartE2EDuration="7.551978687s" podCreationTimestamp="2026-02-16 00:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:04.549793254 +0000 UTC m=+140.931070072" watchObservedRunningTime="2026-02-16 00:11:04.551978687 +0000 UTC m=+140.933255505" Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.622687 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" 
(UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.623619 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.123596522 +0000 UTC m=+141.504873340 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.724430 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.724797 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.224782475 +0000 UTC m=+141.606059293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.825643 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.826100 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.3260806 +0000 UTC m=+141.707357418 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:04 crc kubenswrapper[5114]: I0216 00:11:04.927396 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:04 crc kubenswrapper[5114]: E0216 00:11:04.927958 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.427908781 +0000 UTC m=+141.809185599 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.029263 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.029608 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.529587517 +0000 UTC m=+141.910864335 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.044460 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:11:05 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Feb 16 00:11:05 crc kubenswrapper[5114]: [+]process-running ok Feb 16 00:11:05 crc kubenswrapper[5114]: healthz check failed Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.044539 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.083235 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" event={"ID":"c0423bba-777b-4bd6-bef4-f126cc68f884","Type":"ContainerStarted","Data":"78fe8f3dfa3eefe8b282202ae6359da512cbb2453c4aa2c6781825e441451008"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.084219 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.088059 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" event={"ID":"144852dc-946d-4a33-8453-c3d5bb49127d","Type":"ContainerStarted","Data":"58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.088879 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.090066 5114 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-crpbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.090103 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.091317 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-hkwvd" event={"ID":"bde0f080-6423-454b-b0b5-30b9ee95e15e","Type":"ContainerStarted","Data":"ada01b745a3f7b907a15199dafe5defa160a1abd50eb5d95830a9378fd642ad4"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.095262 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" event={"ID":"762099f7-c3ba-482a-9910-765d1abc7388","Type":"ContainerStarted","Data":"ea7243189de2770d378d6026fcdad98f7dcb04c553c7efcbb0b9ce07061b6d5a"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.095494 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.097464 5114 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-nvr4r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.097514 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" podUID="762099f7-c3ba-482a-9910-765d1abc7388" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.100987 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" event={"ID":"5f84bfa8-7177-4705-8591-f4e33059d290","Type":"ContainerStarted","Data":"37015ca501e90a95149fa97434c52244e34b2465bec9728e961725d2b6e5feac"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.102769 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q" event={"ID":"98b9aa0f-f65f-4bf7-8c09-dfb432cfc00d","Type":"ContainerStarted","Data":"7e5bac119a5e1f4dfb8e185db1cfeabfd21a4de2c0d16fcdce6f423b07ae5cb4"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.103780 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" event={"ID":"7a94ef71-d05f-4af7-b557-e3c034866f73","Type":"ContainerStarted","Data":"0167b7f7bb9f81b1a46d0b22237311a9d7ce7c1a704347c2c23320b563dcf888"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.106298 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns/dns-default-h8c98" event={"ID":"69d65fd4-cf6c-4743-bb16-57d591424ffb","Type":"ContainerStarted","Data":"00dc48970ba70f3b558f5f7c114e280511e2430904600e4c4d58678ba28d36ab"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.106340 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-h8c98" event={"ID":"69d65fd4-cf6c-4743-bb16-57d591424ffb","Type":"ContainerStarted","Data":"63a3acb56b23165d1777cf3cc7510fdc6c8d7e864bcc213910e8dbb78e77e5c0"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.107484 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-h8c98" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.109427 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" event={"ID":"57cff053-a179-4f6a-a38f-ddee39ec6c0b","Type":"ContainerStarted","Data":"3b31c88d7898042878a5ee4dcc62d46f143a8c62dd620a3195ab2f31d413cfb8"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.111645 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" event={"ID":"e5d36493-e813-44ad-9206-003a1ed39135","Type":"ContainerStarted","Data":"122456c227cd363e4287078d349c566548c1d8f8c8fce143399a7340e5d2c261"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.112353 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.113788 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" event={"ID":"25c871eb-063b-4177-b300-f3280f9f7c6a","Type":"ContainerStarted","Data":"6644652cf5312cf6b60970079d8331a582654a9105e03cc9b822a229b99fd81b"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.124287 5114 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" event={"ID":"44a23ff1-70d4-4f26-b405-486ec014bf36","Type":"ContainerStarted","Data":"0e84beac506ca4b462a88da7851be6150193d0b7430c305e5bb90c91a78641a3"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.132174 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.137311 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.637289788 +0000 UTC m=+142.018566606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.156995 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl" podStartSLOduration=117.156974428 podStartE2EDuration="1m57.156974428s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.124631041 +0000 UTC m=+141.505907859" watchObservedRunningTime="2026-02-16 00:11:05.156974428 +0000 UTC m=+141.538251246" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.158625 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" event={"ID":"dfbe9d8e-db99-404d-ba9d-d173ab3b6434","Type":"ContainerStarted","Data":"b8e151d169185ec77217e931ae7208bb81e0bad786d521a1d67d87c5196e9a61"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.158659 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" event={"ID":"dfbe9d8e-db99-404d-ba9d-d173ab3b6434","Type":"ContainerStarted","Data":"5d1617021ad8cc2cca4477b6671956a5a18ba32b2abc08b62ab52cec3ffb63f8"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.177038 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" podStartSLOduration=118.177022559 
podStartE2EDuration="1m58.177022559s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.176646448 +0000 UTC m=+141.557923266" watchObservedRunningTime="2026-02-16 00:11:05.177022559 +0000 UTC m=+141.558299377" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.203138 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" event={"ID":"289e41c8-1dae-4739-a9a5-41f112254197","Type":"ContainerStarted","Data":"1b927d35daebfa4d91d1fba7421513b6d9a0d7d30900af89fdeef9232ebbb744"} Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.203576 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" podStartSLOduration=117.203561738 podStartE2EDuration="1m57.203561738s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.202537149 +0000 UTC m=+141.583813957" watchObservedRunningTime="2026-02-16 00:11:05.203561738 +0000 UTC m=+141.584838556" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.206494 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.210148 5114 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-rswb4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.210195 5114 prober.go:120] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" podUID="e63eb2b4-7d50-4fa4-b866-2a07239fda8e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.225828 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-fj6tq" podStartSLOduration=117.225814593 podStartE2EDuration="1m57.225814593s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.224871116 +0000 UTC m=+141.606147924" watchObservedRunningTime="2026-02-16 00:11:05.225814593 +0000 UTC m=+141.607091411" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.234195 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-9kq9m" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.234591 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.236271 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.736237025 +0000 UTC m=+142.117513843 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.284874 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-h8c98" podStartSLOduration=9.284856734 podStartE2EDuration="9.284856734s" podCreationTimestamp="2026-02-16 00:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.248185961 +0000 UTC m=+141.629462779" watchObservedRunningTime="2026-02-16 00:11:05.284856734 +0000 UTC m=+141.666133552" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.312917 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" podStartSLOduration=118.312904837 podStartE2EDuration="1m58.312904837s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.311855226 +0000 UTC m=+141.693132044" watchObservedRunningTime="2026-02-16 00:11:05.312904837 +0000 UTC m=+141.694181655" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.314594 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-qqb9h" podStartSLOduration=118.314587756 podStartE2EDuration="1m58.314587756s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.285516103 +0000 UTC m=+141.666792911" watchObservedRunningTime="2026-02-16 00:11:05.314587756 +0000 UTC m=+141.695864574" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.341183 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.341750 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.841725692 +0000 UTC m=+142.223002680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.363018 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-hkwvd" podStartSLOduration=9.363001859 podStartE2EDuration="9.363001859s" podCreationTimestamp="2026-02-16 00:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.334888264 +0000 UTC m=+141.716165082" watchObservedRunningTime="2026-02-16 00:11:05.363001859 +0000 UTC m=+141.744278677" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.397735 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-tdk6q" podStartSLOduration=117.397698984 podStartE2EDuration="1m57.397698984s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.363763711 +0000 UTC m=+141.745040529" watchObservedRunningTime="2026-02-16 00:11:05.397698984 +0000 UTC m=+141.778975812" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.399675 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" podStartSLOduration=117.399663151 podStartE2EDuration="1m57.399663151s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.398020043 +0000 UTC m=+141.779296881" watchObservedRunningTime="2026-02-16 00:11:05.399663151 +0000 UTC m=+141.780939989" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.442703 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.443048 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:05.943030878 +0000 UTC m=+142.324307696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.468822 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-hsddz" podStartSLOduration=117.468791654 podStartE2EDuration="1m57.468791654s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.468045302 +0000 UTC m=+141.849322140" watchObservedRunningTime="2026-02-16 00:11:05.468791654 +0000 UTC m=+141.850068472" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.469434 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-5wc8p" podStartSLOduration=117.469427572 podStartE2EDuration="1m57.469427572s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.443894503 +0000 UTC m=+141.825171321" watchObservedRunningTime="2026-02-16 00:11:05.469427572 +0000 UTC m=+141.850704390" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.491599 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-rnn26" podStartSLOduration=117.491584695 podStartE2EDuration="1m57.491584695s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.490272336 +0000 UTC m=+141.871549154" watchObservedRunningTime="2026-02-16 00:11:05.491584695 +0000 UTC m=+141.872861523" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.539455 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-7qhtw" podStartSLOduration=117.539230365 podStartE2EDuration="1m57.539230365s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.514735365 +0000 UTC m=+141.896012183" watchObservedRunningTime="2026-02-16 00:11:05.539230365 +0000 UTC m=+141.920507183" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.544576 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.544904 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.044889949 +0000 UTC m=+142.426166767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.574404 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-42lxx" podStartSLOduration=117.574389934 podStartE2EDuration="1m57.574389934s" podCreationTimestamp="2026-02-16 00:09:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:05.539669628 +0000 UTC m=+141.920946446" watchObservedRunningTime="2026-02-16 00:11:05.574389934 +0000 UTC m=+141.955666752" Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.646086 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.646911 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.146894865 +0000 UTC m=+142.528171683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.749099 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.749572 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.24954987 +0000 UTC m=+142.630826838 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.850571 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.850833 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.350805424 +0000 UTC m=+142.732082242 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.851007 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.851316 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.351302438 +0000 UTC m=+142.732579256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:05 crc kubenswrapper[5114]: I0216 00:11:05.951735 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:05 crc kubenswrapper[5114]: E0216 00:11:05.952359 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.452336626 +0000 UTC m=+142.833613444 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.043997 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:11:06 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Feb 16 00:11:06 crc kubenswrapper[5114]: [+]process-running ok Feb 16 00:11:06 crc kubenswrapper[5114]: healthz check failed Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.044113 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.054025 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.054525 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2026-02-16 00:11:06.554509087 +0000 UTC m=+142.935785905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.154778 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.155187 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.655153993 +0000 UTC m=+143.036430811 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.236125 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" gracePeriod=30 Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.238415 5114 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-rswb4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body= Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.238475 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" podUID="e63eb2b4-7d50-4fa4-b866-2a07239fda8e" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/healthz\": dial tcp 10.217.0.29:8443: connect: connection refused" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.248960 5114 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-nvr4r container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Feb 16 00:11:06 crc 
kubenswrapper[5114]: I0216 00:11:06.249038 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" podUID="762099f7-c3ba-482a-9910-765d1abc7388" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.257209 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.273663 5114 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-crpbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body= Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.273736 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.274482 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.77445824 +0000 UTC m=+143.155735058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.372191 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.372514 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.87245056 +0000 UTC m=+143.253727388 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.373951 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.381098 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.88107767 +0000 UTC m=+143.262354488 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.474846 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.474979 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.97495891 +0000 UTC m=+143.356235728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.475166 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.475418 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:06.975410213 +0000 UTC m=+143.356687031 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.543625 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.576603 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.576797 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.07676955 +0000 UTC m=+143.458046368 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.577174 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.577589 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.077573464 +0000 UTC m=+143.458850322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.580069 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.580231 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.582559 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.588986 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.678225 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.678336 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-16 00:11:07.178317883 +0000 UTC m=+143.559594701 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.678810 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c61cbc92-a845-41fd-915f-daa5eb2e7344-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.678854 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.678872 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c61cbc92-a845-41fd-915f-daa5eb2e7344-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.679117 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.179110226 +0000 UTC m=+143.560387044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.786811 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.787090 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.287039773 +0000 UTC m=+143.668316591 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.787810 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c61cbc92-a845-41fd-915f-daa5eb2e7344-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.787915 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.787940 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c61cbc92-a845-41fd-915f-daa5eb2e7344-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.787935 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c61cbc92-a845-41fd-915f-daa5eb2e7344-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " 
pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.788350 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.288328491 +0000 UTC m=+143.669605309 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.828499 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c61cbc92-a845-41fd-915f-daa5eb2e7344-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.891691 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.892493 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-02-16 00:11:07.392467178 +0000 UTC m=+143.773743996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.899928 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:06 crc kubenswrapper[5114]: I0216 00:11:06.993839 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:06 crc kubenswrapper[5114]: E0216 00:11:06.994335 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.494317309 +0000 UTC m=+143.875594127 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.032128 5114 ???:1] "http: TLS handshake error from 192.168.126.11:48280: no serving certificate available for the kubelet" Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.046558 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:11:07 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Feb 16 00:11:07 crc kubenswrapper[5114]: [+]process-running ok Feb 16 00:11:07 crc kubenswrapper[5114]: healthz check failed Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.046664 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.095266 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.095785 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.595755858 +0000 UTC m=+143.977032676 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.199075 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.199875 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.699836354 +0000 UTC m=+144.081113552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.270875 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" event={"ID":"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9","Type":"ContainerStarted","Data":"5c9d67f43c4fe0414a3e1f0e8c0f61ed2c7ff3aed320494a1a3074e47d69e00e"}
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.276735 5114 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-crpbt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused" start-of-body=
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.276823 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.39:8080/healthz\": dial tcp 10.217.0.39:8080: connect: connection refused"
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.280440 5114 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-d8d6z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.280505 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" podUID="e5d36493-e813-44ad-9206-003a1ed39135" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.301724 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.302369 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.802339455 +0000 UTC m=+144.183616273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.338331 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.404041 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.405815 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:07.905801843 +0000 UTC m=+144.287078661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.506209 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.506402 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.006366907 +0000 UTC m=+144.387643725 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.506902 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.507472 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.007463648 +0000 UTC m=+144.388740466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.608260 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.608405 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.108362062 +0000 UTC m=+144.489638880 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.608535 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.608998 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.10897952 +0000 UTC m=+144.490256338 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.710288 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.710510 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.210479151 +0000 UTC m=+144.591755979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.712456 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.212440348 +0000 UTC m=+144.593717166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.711990 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.814067 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.814338 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.31431877 +0000 UTC m=+144.695595588 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.814398 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.814712 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.314694081 +0000 UTC m=+144.695970909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.915581 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.915813 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.41578361 +0000 UTC m=+144.797060428 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:07 crc kubenswrapper[5114]: I0216 00:11:07.915972 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:07 crc kubenswrapper[5114]: E0216 00:11:07.916322 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.416308646 +0000 UTC m=+144.797585464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.017380 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.017663 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.517648742 +0000 UTC m=+144.898925560 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.044682 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 16 00:11:08 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld
Feb 16 00:11:08 crc kubenswrapper[5114]: [+]process-running ok
Feb 16 00:11:08 crc kubenswrapper[5114]: healthz check failed
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.044792 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.119997 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.120444 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.62042331 +0000 UTC m=+145.001700118 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.221120 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.221700 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.721656474 +0000 UTC m=+145.102933292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.278460 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c61cbc92-a845-41fd-915f-daa5eb2e7344","Type":"ContainerStarted","Data":"b032432b9f337bb0f1d2344512b6398e97ccb40ca95349dc844725455ce1e99b"}
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.278538 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c61cbc92-a845-41fd-915f-daa5eb2e7344","Type":"ContainerStarted","Data":"a3ff4613054459e43418dc0203feebc289f6a97666211afe58693bc5d8bbc94e"}
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.318747 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.318718946 podStartE2EDuration="2.318718946s" podCreationTimestamp="2026-02-16 00:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:08.318018596 +0000 UTC m=+144.699295414" watchObservedRunningTime="2026-02-16 00:11:08.318718946 +0000 UTC m=+144.699995764"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.322830 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.323264 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.823228237 +0000 UTC m=+145.204505055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.424321 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.424502 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:08.924474441 +0000 UTC m=+145.305751259 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.518728 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.518873 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.525993 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.526386 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.026373324 +0000 UTC m=+145.407650142 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.537038 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.544664 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-x9wkk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body=
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.544749 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-x9wkk" podUID="f47442a6-b454-45d5-8094-794e063f573d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.583801 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-llmwl"]
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.628266 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.630775 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.130734798 +0000 UTC m=+145.512011626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.705946 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llmwl"]
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.706055 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llmwl"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.708299 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.731086 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.731463 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.231449136 +0000 UTC m=+145.612725954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.758471 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9w976"]
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.772313 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w976"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.774873 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9w976"]
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.776922 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\""
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.832734 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.832983 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.332941647 +0000 UTC m=+145.714218465 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.833195 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-utilities\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.833261 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jdc4\" (UniqueName: \"kubernetes.io/projected/d846f09e-4870-4305-857c-b47bbe247686-kube-api-access-4jdc4\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.833436 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.833827 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-catalog-content\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl"
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.833887 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.333870394 +0000 UTC m=+145.715147212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.934964 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.935173 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.435144039 +0000 UTC m=+145.816420867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.935305 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-catalog-content\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.935479 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-utilities\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.935565 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-catalog-content\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl"
Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.935672 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-utilities\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") "
pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.935704 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4jdc4\" (UniqueName: \"kubernetes.io/projected/d846f09e-4870-4305-857c-b47bbe247686-kube-api-access-4jdc4\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.935732 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.935768 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8f2h\" (UniqueName: \"kubernetes.io/projected/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-kube-api-access-w8f2h\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.936042 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-catalog-content\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.936091 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-utilities\") pod 
\"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:08 crc kubenswrapper[5114]: E0216 00:11:08.936113 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.436103237 +0000 UTC m=+145.817380055 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.956308 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jdc4\" (UniqueName: \"kubernetes.io/projected/d846f09e-4870-4305-857c-b47bbe247686-kube-api-access-4jdc4\") pod \"community-operators-llmwl\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.958980 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-74kpp"] Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.965070 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:08 crc kubenswrapper[5114]: I0216 00:11:08.969092 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-74kpp"] Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.036572 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.036870 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.037657 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.536775864 +0000 UTC m=+145.918052682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.037854 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-utilities\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.037932 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq7tj\" (UniqueName: \"kubernetes.io/projected/0d296a72-b033-40b3-8652-128687b79c8e-kube-api-access-qq7tj\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.038002 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.038037 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8f2h\" (UniqueName: \"kubernetes.io/projected/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-kube-api-access-w8f2h\") pod 
\"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.038176 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-catalog-content\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.038263 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-catalog-content\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.038304 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-utilities\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.038370 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.538352039 +0000 UTC m=+145.919628857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.038416 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-utilities\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.038716 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-catalog-content\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.042054 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:11:09 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Feb 16 00:11:09 crc kubenswrapper[5114]: [+]process-running ok Feb 16 00:11:09 crc kubenswrapper[5114]: healthz check failed Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.042107 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.070364 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8f2h\" (UniqueName: \"kubernetes.io/projected/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-kube-api-access-w8f2h\") pod \"certified-operators-9w976\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.088051 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.139475 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.139677 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qq7tj\" (UniqueName: \"kubernetes.io/projected/0d296a72-b033-40b3-8652-128687b79c8e-kube-api-access-qq7tj\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.139778 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-catalog-content\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.139815 5114 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-utilities\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.140284 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-utilities\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.140356 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.640339835 +0000 UTC m=+146.021616653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.140845 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-catalog-content\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.159842 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq7tj\" (UniqueName: \"kubernetes.io/projected/0d296a72-b033-40b3-8652-128687b79c8e-kube-api-access-qq7tj\") pod \"community-operators-74kpp\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") " pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.187504 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xkj8d"] Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.240787 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.241368 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.741349362 +0000 UTC m=+146.122626180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.312276 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkj8d"] Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.312428 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.314547 5114 generic.go:358] "Generic (PLEG): container finished" podID="c61cbc92-a845-41fd-915f-daa5eb2e7344" containerID="b032432b9f337bb0f1d2344512b6398e97ccb40ca95349dc844725455ce1e99b" exitCode=0 Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.314714 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c61cbc92-a845-41fd-915f-daa5eb2e7344","Type":"ContainerDied","Data":"b032432b9f337bb0f1d2344512b6398e97ccb40ca95349dc844725455ce1e99b"} Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.319839 5114 generic.go:358] "Generic (PLEG): container finished" podID="25c871eb-063b-4177-b300-f3280f9f7c6a" containerID="6644652cf5312cf6b60970079d8331a582654a9105e03cc9b822a229b99fd81b" exitCode=0 Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.321022 5114 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" event={"ID":"25c871eb-063b-4177-b300-f3280f9f7c6a","Type":"ContainerDied","Data":"6644652cf5312cf6b60970079d8331a582654a9105e03cc9b822a229b99fd81b"} Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.328237 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-nhfsj" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.328482 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.356609 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.356940 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.856912281 +0000 UTC m=+146.238189099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.462470 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-utilities\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.463529 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-catalog-content\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.463647 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p22gz\" (UniqueName: \"kubernetes.io/projected/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-kube-api-access-p22gz\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.463717 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.464084 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:09.964063365 +0000 UTC m=+146.345340183 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.550403 5114 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-d8d6z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded" start-of-body= Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.550513 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" podUID="e5d36493-e813-44ad-9206-003a1ed39135" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": context deadline exceeded" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.564968 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.565221 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-utilities\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.565284 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-catalog-content\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.565313 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p22gz\" (UniqueName: \"kubernetes.io/projected/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-kube-api-access-p22gz\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.565726 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.065709881 +0000 UTC m=+146.446986699 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.566128 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-utilities\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.566365 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-catalog-content\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.569003 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.569109 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.579652 5114 patch_prober.go:28] interesting pod/console-64d44f6ddf-l8qvm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.579716 5114 
prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-l8qvm" podUID="d7dc7990-5b90-402e-b2bc-53d94e232af4" containerName="console" probeResult="failure" output="Get \"https://10.217.0.33:8443/health\": dial tcp 10.217.0.33:8443: connect: connection refused" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.595500 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-llmwl"] Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.610753 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p22gz\" (UniqueName: \"kubernetes.io/projected/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-kube-api-access-p22gz\") pod \"certified-operators-xkj8d\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") " pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: W0216 00:11:09.634926 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd846f09e_4870_4305_857c_b47bbe247686.slice/crio-140f8ba1f8fbef4aaba6d1dbbcd0e746a4eeaa7fe7a598e72f5681fd1e263a1c WatchSource:0}: Error finding container 140f8ba1f8fbef4aaba6d1dbbcd0e746a4eeaa7fe7a598e72f5681fd1e263a1c: Status 404 returned error can't find the container with id 140f8ba1f8fbef4aaba6d1dbbcd0e746a4eeaa7fe7a598e72f5681fd1e263a1c Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.647999 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.655904 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9w976"] Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.668862 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.670699 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.170369094 +0000 UTC m=+146.551645912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: W0216 00:11:09.675435 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod35d79a09_4a13_4f64_b2ef_f7061b82f1f9.slice/crio-7e93191f9d6f8833a097f9d20745fdda23848b0bef8896105ba0e82d9fa736d2 WatchSource:0}: Error finding container 7e93191f9d6f8833a097f9d20745fdda23848b0bef8896105ba0e82d9fa736d2: Status 404 returned error can't find the container with id 7e93191f9d6f8833a097f9d20745fdda23848b0bef8896105ba0e82d9fa736d2 Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.769791 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.770001 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.269957 +0000 UTC m=+146.651233818 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.770233 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.770561 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.270549237 +0000 UTC m=+146.651826055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.778036 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-74kpp"] Feb 16 00:11:09 crc kubenswrapper[5114]: W0216 00:11:09.782835 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d296a72_b033_40b3_8652_128687b79c8e.slice/crio-49c6fc638881f276c49080b60c0a211d47ea71819f40efceac96c7582a614544 WatchSource:0}: Error finding container 49c6fc638881f276c49080b60c0a211d47ea71819f40efceac96c7582a614544: Status 404 returned error can't find the container with id 49c6fc638881f276c49080b60c0a211d47ea71819f40efceac96c7582a614544 Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.871880 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.872221 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.372201762 +0000 UTC m=+146.753478580 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.958096 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xkj8d"] Feb 16 00:11:09 crc kubenswrapper[5114]: W0216 00:11:09.961597 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8ceef617_8c1b_4c87_bca9_74b3a78f25fc.slice/crio-f94131c0677e521e383eb14b6403f6c0feaf073de411584cc25f9b459a829998 WatchSource:0}: Error finding container f94131c0677e521e383eb14b6403f6c0feaf073de411584cc25f9b459a829998: Status 404 returned error can't find the container with id f94131c0677e521e383eb14b6403f6c0feaf073de411584cc25f9b459a829998 Feb 16 00:11:09 crc kubenswrapper[5114]: I0216 00:11:09.974095 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:09 crc kubenswrapper[5114]: E0216 00:11:09.974728 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.474709043 +0000 UTC m=+146.855985861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.039920 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.048935 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:11:10 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Feb 16 00:11:10 crc kubenswrapper[5114]: [+]process-running ok Feb 16 00:11:10 crc kubenswrapper[5114]: healthz check failed Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.049006 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.075957 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.076362 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.576316667 +0000 UTC m=+146.957593485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.076606 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.077178 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.577167652 +0000 UTC m=+146.958444470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.178233 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.178828 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.678804087 +0000 UTC m=+147.060080905 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.280141 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.284186 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-d8d6z" Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.289989 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.789963188 +0000 UTC m=+147.171240006 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.330907 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkj8d" event={"ID":"8ceef617-8c1b-4c87-bca9-74b3a78f25fc","Type":"ContainerStarted","Data":"f94131c0677e521e383eb14b6403f6c0feaf073de411584cc25f9b459a829998"} Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.332229 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w976" event={"ID":"35d79a09-4a13-4f64-b2ef-f7061b82f1f9","Type":"ContainerStarted","Data":"7e93191f9d6f8833a097f9d20745fdda23848b0bef8896105ba0e82d9fa736d2"} Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.333708 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmwl" event={"ID":"d846f09e-4870-4305-857c-b47bbe247686","Type":"ContainerStarted","Data":"81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f"} Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.333741 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmwl" event={"ID":"d846f09e-4870-4305-857c-b47bbe247686","Type":"ContainerStarted","Data":"140f8ba1f8fbef4aaba6d1dbbcd0e746a4eeaa7fe7a598e72f5681fd1e263a1c"} Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.335777 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74kpp" 
event={"ID":"0d296a72-b033-40b3-8652-128687b79c8e","Type":"ContainerStarted","Data":"49c6fc638881f276c49080b60c0a211d47ea71819f40efceac96c7582a614544"} Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.338870 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" event={"ID":"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9","Type":"ContainerStarted","Data":"6a1b0091cae4d10b22779d76a4d49341a68a3451e8e770a9898232ad72394c16"} Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.381729 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.382519 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.882499979 +0000 UTC m=+147.263776797 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.483897 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.486122 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:10.985765842 +0000 UTC m=+147.367042660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.577851 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.585292 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.585644 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.085617164 +0000 UTC m=+147.466893982 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.647537 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.648517 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c61cbc92-a845-41fd-915f-daa5eb2e7344" containerName="pruner" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.648547 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="c61cbc92-a845-41fd-915f-daa5eb2e7344" containerName="pruner" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.648801 5114 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="c61cbc92-a845-41fd-915f-daa5eb2e7344" containerName="pruner" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.677496 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.686768 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c61cbc92-a845-41fd-915f-daa5eb2e7344-kubelet-dir\") pod \"c61cbc92-a845-41fd-915f-daa5eb2e7344\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.686857 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c61cbc92-a845-41fd-915f-daa5eb2e7344-kube-api-access\") pod \"c61cbc92-a845-41fd-915f-daa5eb2e7344\" (UID: \"c61cbc92-a845-41fd-915f-daa5eb2e7344\") " Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.687167 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.687572 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.187555288 +0000 UTC m=+147.568832106 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.687735 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c61cbc92-a845-41fd-915f-daa5eb2e7344-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c61cbc92-a845-41fd-915f-daa5eb2e7344" (UID: "c61cbc92-a845-41fd-915f-daa5eb2e7344"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.697033 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c61cbc92-a845-41fd-915f-daa5eb2e7344-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c61cbc92-a845-41fd-915f-daa5eb2e7344" (UID: "c61cbc92-a845-41fd-915f-daa5eb2e7344"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.788142 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.788190 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c871eb-063b-4177-b300-f3280f9f7c6a-config-volume\") pod \"25c871eb-063b-4177-b300-f3280f9f7c6a\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.788267 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvvwb\" (UniqueName: \"kubernetes.io/projected/25c871eb-063b-4177-b300-f3280f9f7c6a-kube-api-access-nvvwb\") pod \"25c871eb-063b-4177-b300-f3280f9f7c6a\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.788362 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25c871eb-063b-4177-b300-f3280f9f7c6a-secret-volume\") pod \"25c871eb-063b-4177-b300-f3280f9f7c6a\" (UID: \"25c871eb-063b-4177-b300-f3280f9f7c6a\") " Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.788694 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c61cbc92-a845-41fd-915f-daa5eb2e7344-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.788710 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c61cbc92-a845-41fd-915f-daa5eb2e7344-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.789335 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.289301567 +0000 UTC m=+147.670578395 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.789809 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25c871eb-063b-4177-b300-f3280f9f7c6a-config-volume" (OuterVolumeSpecName: "config-volume") pod "25c871eb-063b-4177-b300-f3280f9f7c6a" (UID: "25c871eb-063b-4177-b300-f3280f9f7c6a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.813143 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25c871eb-063b-4177-b300-f3280f9f7c6a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "25c871eb-063b-4177-b300-f3280f9f7c6a" (UID: "25c871eb-063b-4177-b300-f3280f9f7c6a"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.815772 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25c871eb-063b-4177-b300-f3280f9f7c6a-kube-api-access-nvvwb" (OuterVolumeSpecName: "kube-api-access-nvvwb") pod "25c871eb-063b-4177-b300-f3280f9f7c6a" (UID: "25c871eb-063b-4177-b300-f3280f9f7c6a"). InnerVolumeSpecName "kube-api-access-nvvwb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.891545 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.892052 5114 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25c871eb-063b-4177-b300-f3280f9f7c6a-config-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.892076 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvvwb\" (UniqueName: \"kubernetes.io/projected/25c871eb-063b-4177-b300-f3280f9f7c6a-kube-api-access-nvvwb\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.892096 5114 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/25c871eb-063b-4177-b300-f3280f9f7c6a-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.892473 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.392452195 +0000 UTC m=+147.773729013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.911534 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.911684 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fsm82"] Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.913347 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.919530 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25c871eb-063b-4177-b300-f3280f9f7c6a" containerName="collect-profiles" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.919557 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="25c871eb-063b-4177-b300-f3280f9f7c6a" containerName="collect-profiles" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.919897 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="25c871eb-063b-4177-b300-f3280f9f7c6a" containerName="collect-profiles" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.927782 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.928130 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.935312 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsm82"] Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.936781 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.945430 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.993335 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.993727 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1528149-fdf5-43a5-a3f9-14495b62437d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:10 crc kubenswrapper[5114]: I0216 00:11:10.993970 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1528149-fdf5-43a5-a3f9-14495b62437d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:10 crc kubenswrapper[5114]: E0216 00:11:10.994214 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.494191874 +0000 UTC m=+147.875468682 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.044477 5114 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-vdzjf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 16 00:11:11 crc kubenswrapper[5114]: [-]has-synced failed: reason withheld Feb 16 00:11:11 crc kubenswrapper[5114]: [+]process-running ok Feb 16 00:11:11 crc kubenswrapper[5114]: healthz check failed Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.044995 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" podUID="5478df7b-0c00-4c78-9a8e-1bdba1477cde" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.088785 5114 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.096109 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxlls\" (UniqueName: \"kubernetes.io/projected/5ffe7c6f-6349-415c-9729-182b0cc43e93-kube-api-access-dxlls\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 
00:11:11.096184 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.096683 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.596662713 +0000 UTC m=+147.977939541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.097200 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-catalog-content\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.097233 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1528149-fdf5-43a5-a3f9-14495b62437d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " 
pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.097266 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-utilities\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.097288 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1528149-fdf5-43a5-a3f9-14495b62437d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.097396 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1528149-fdf5-43a5-a3f9-14495b62437d-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.118209 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1528149-fdf5-43a5-a3f9-14495b62437d-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.155417 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nplm7"] Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.166157 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nplm7"] Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 
00:11:11.166621 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.197910 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.199721 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-catalog-content\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.199846 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-utilities\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.200011 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dxlls\" (UniqueName: \"kubernetes.io/projected/5ffe7c6f-6349-415c-9729-182b0cc43e93-kube-api-access-dxlls\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.200767 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-catalog-content\") pod 
\"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.200925 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.700911654 +0000 UTC m=+148.082188472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.201971 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-utilities\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.218542 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxlls\" (UniqueName: \"kubernetes.io/projected/5ffe7c6f-6349-415c-9729-182b0cc43e93-kube-api-access-dxlls\") pod \"redhat-marketplace-fsm82\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.274380 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.284369 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.302328 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-utilities\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.302378 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.302432 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8zc6\" (UniqueName: \"kubernetes.io/projected/05de580f-e9d2-4045-9403-1fba0034fc3d-kube-api-access-s8zc6\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.302469 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-catalog-content\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc 
kubenswrapper[5114]: E0216 00:11:11.303071 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.803043713 +0000 UTC m=+148.184320531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.361466 5114 generic.go:358] "Generic (PLEG): container finished" podID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerID="73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589" exitCode=0 Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.361986 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w976" event={"ID":"35d79a09-4a13-4f64-b2ef-f7061b82f1f9","Type":"ContainerDied","Data":"73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589"} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.363156 5114 generic.go:358] "Generic (PLEG): container finished" podID="d846f09e-4870-4305-857c-b47bbe247686" containerID="81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f" exitCode=0 Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.363359 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmwl" event={"ID":"d846f09e-4870-4305-857c-b47bbe247686","Type":"ContainerDied","Data":"81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f"} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 
00:11:11.384977 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" event={"ID":"25c871eb-063b-4177-b300-f3280f9f7c6a","Type":"ContainerDied","Data":"6911b7256e9edb22084eaeb0151130e5b62b6b489932d69234cb73c964dd31cf"} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.385014 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520000-fprp5" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.385023 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6911b7256e9edb22084eaeb0151130e5b62b6b489932d69234cb73c964dd31cf" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.404665 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.404966 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.904902065 +0000 UTC m=+148.286178923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.405238 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-utilities\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.405344 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.405466 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8zc6\" (UniqueName: \"kubernetes.io/projected/05de580f-e9d2-4045-9403-1fba0034fc3d-kube-api-access-s8zc6\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.405534 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-catalog-content\") pod \"redhat-marketplace-nplm7\" (UID: 
\"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.405769 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-utilities\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.406019 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-catalog-content\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.406119 5114 generic.go:358] "Generic (PLEG): container finished" podID="0d296a72-b033-40b3-8652-128687b79c8e" containerID="d9b96d9a56a035f36f44d26e97c781c44cd2e6b7ffc63e5ec875b3ef4151551c" exitCode=0 Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.406149 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:11.90613041 +0000 UTC m=+148.287407228 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.406398 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74kpp" event={"ID":"0d296a72-b033-40b3-8652-128687b79c8e","Type":"ContainerDied","Data":"d9b96d9a56a035f36f44d26e97c781c44cd2e6b7ffc63e5ec875b3ef4151551c"} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.416390 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" event={"ID":"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9","Type":"ContainerStarted","Data":"a7c5618ce924e32c6dbc6b10adfcdfa56cde61754ce390de316cb162ce47ea5a"} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.433083 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"c61cbc92-a845-41fd-915f-daa5eb2e7344","Type":"ContainerDied","Data":"a3ff4613054459e43418dc0203feebc289f6a97666211afe58693bc5d8bbc94e"} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.433236 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ff4613054459e43418dc0203feebc289f6a97666211afe58693bc5d8bbc94e" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.433114 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.434490 5114 generic.go:358] "Generic (PLEG): container finished" podID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerID="99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c" exitCode=0 Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.434528 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkj8d" event={"ID":"8ceef617-8c1b-4c87-bca9-74b3a78f25fc","Type":"ContainerDied","Data":"99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c"} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.434626 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8zc6\" (UniqueName: \"kubernetes.io/projected/05de580f-e9d2-4045-9403-1fba0034fc3d-kube-api-access-s8zc6\") pod \"redhat-marketplace-nplm7\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") " pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.453967 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" podStartSLOduration=15.453947256 podStartE2EDuration="15.453947256s" podCreationTimestamp="2026-02-16 00:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:11.453599736 +0000 UTC m=+147.834876564" watchObservedRunningTime="2026-02-16 00:11:11.453947256 +0000 UTC m=+147.835224074" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.500561 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.509775 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.510441 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:12.010401772 +0000 UTC m=+148.391678590 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.611797 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.612773 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: 
nodeName:}" failed. No retries permitted until 2026-02-16 00:11:12.112751618 +0000 UTC m=+148.494028436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.673967 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsm82"] Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.713429 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.713861 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:12.213829857 +0000 UTC m=+148.595106675 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.752877 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.764102 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8ld7d"] Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.782508 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.782994 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ld7d"] Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.788835 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.820924 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.821331 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:12.321315631 +0000 UTC m=+148.702592450 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: W0216 00:11:11.829444 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05de580f_e9d2_4045_9403_1fba0034fc3d.slice/crio-bf94e4081d4c792d1565708c09a1f87c089381a437042bf2e7c7e35d64ef24ec WatchSource:0}: Error finding container bf94e4081d4c792d1565708c09a1f87c089381a437042bf2e7c7e35d64ef24ec: Status 404 returned error can't find the container with id bf94e4081d4c792d1565708c09a1f87c089381a437042bf2e7c7e35d64ef24ec Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.838818 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nplm7"] Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.922697 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.922908 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-02-16 00:11:12.422876024 +0000 UTC m=+148.804152842 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.923361 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.923410 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-utilities\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.923519 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-catalog-content\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 
00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.923699 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwz5w\" (UniqueName: \"kubernetes.io/projected/a392cbd8-29d4-4a9f-a413-40249fe74474-kube-api-access-jwz5w\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:11 crc kubenswrapper[5114]: E0216 00:11:11.923843 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-02-16 00:11:12.423821002 +0000 UTC m=+148.805097830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kmt8j" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.946524 5114 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-16T00:11:11.088811255Z","UUID":"f671778c-7957-4d6c-8343-874f265f6815","Handler":null,"Name":"","Endpoint":""} Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.950660 5114 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 16 00:11:11 crc kubenswrapper[5114]: I0216 00:11:11.950706 5114 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: 
kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.025039 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.025262 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-utilities\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.025334 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-catalog-content\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.025663 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jwz5w\" (UniqueName: \"kubernetes.io/projected/a392cbd8-29d4-4a9f-a413-40249fe74474-kube-api-access-jwz5w\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.025736 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-catalog-content\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " 
pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.025814 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-utilities\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.032312 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.070894 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.071760 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwz5w\" (UniqueName: \"kubernetes.io/projected/a392cbd8-29d4-4a9f-a413-40249fe74474-kube-api-access-jwz5w\") pod \"redhat-operators-8ld7d\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.075446 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-vdzjf" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.113292 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.130724 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.134298 5114 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.134336 5114 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.161415 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8ht4r"] Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.171722 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ht4r"] Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.172507 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.192723 5114 ???:1] "http: TLS handshake error from 192.168.126.11:48294: no serving certificate available for the kubelet" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.198317 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kmt8j\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.231897 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-utilities\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.231987 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-catalog-content\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.232033 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t8ff\" (UniqueName: \"kubernetes.io/projected/b96e138d-614d-45ad-8cf4-2b68b9c05830-kube-api-access-7t8ff\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.334913 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-utilities\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.335006 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-catalog-content\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.335044 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7t8ff\" (UniqueName: \"kubernetes.io/projected/b96e138d-614d-45ad-8cf4-2b68b9c05830-kube-api-access-7t8ff\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.336168 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-utilities\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.336468 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-catalog-content\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.361734 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7t8ff\" (UniqueName: \"kubernetes.io/projected/b96e138d-614d-45ad-8cf4-2b68b9c05830-kube-api-access-7t8ff\") pod \"redhat-operators-8ht4r\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.368902 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ld7d"] Feb 16 00:11:12 crc kubenswrapper[5114]: W0216 00:11:12.396153 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda392cbd8_29d4_4a9f_a413_40249fe74474.slice/crio-2e12c279e46449e23b129f41f97eb3f6ce80c49eea6690d7f081c3be9c73e047 WatchSource:0}: Error finding container 2e12c279e46449e23b129f41f97eb3f6ce80c49eea6690d7f081c3be9c73e047: Status 404 returned error can't find the container with id 2e12c279e46449e23b129f41f97eb3f6ce80c49eea6690d7f081c3be9c73e047 Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.443978 5114 generic.go:358] "Generic (PLEG): container finished" podID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerID="4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42" exitCode=0 Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.444103 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nplm7" event={"ID":"05de580f-e9d2-4045-9403-1fba0034fc3d","Type":"ContainerDied","Data":"4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.444161 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nplm7" event={"ID":"05de580f-e9d2-4045-9403-1fba0034fc3d","Type":"ContainerStarted","Data":"bf94e4081d4c792d1565708c09a1f87c089381a437042bf2e7c7e35d64ef24ec"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.445451 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8ld7d" event={"ID":"a392cbd8-29d4-4a9f-a413-40249fe74474","Type":"ContainerStarted","Data":"2e12c279e46449e23b129f41f97eb3f6ce80c49eea6690d7f081c3be9c73e047"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.451359 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-zffmj" event={"ID":"98c39729-d4c0-44a4-bf4e-c8c32a2d9bb9","Type":"ContainerStarted","Data":"1440d25dfc40c7946afe82aef9d450e5347817127401a0bb26757931cc0df113"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.455566 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"a1528149-fdf5-43a5-a3f9-14495b62437d","Type":"ContainerStarted","Data":"9cdf12073896d9a2d9ce510607ab1d9f82026c7b4619a7df8ebae6724464face"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.455613 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"a1528149-fdf5-43a5-a3f9-14495b62437d","Type":"ContainerStarted","Data":"4d122b04c4e1a9a63f88a430b252802cb003ced060a03c72d58bfff24dbe758b"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.460138 5114 generic.go:358] "Generic (PLEG): container finished" podID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerID="c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d" exitCode=0 Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.461092 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.461490 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsm82" event={"ID":"5ffe7c6f-6349-415c-9729-182b0cc43e93","Type":"ContainerDied","Data":"c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.461577 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsm82" event={"ID":"5ffe7c6f-6349-415c-9729-182b0cc43e93","Type":"ContainerStarted","Data":"f3d643ec5655fa16d4b95c51ad5e4e51cb9e3ba8a4b7dafe36685a3e0001c425"} Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.497632 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=2.4976110289999998 podStartE2EDuration="2.497611029s" podCreationTimestamp="2026-02-16 00:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:12.493565181 +0000 UTC m=+148.874841999" watchObservedRunningTime="2026-02-16 00:11:12.497611029 +0000 UTC m=+148.878887847" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.540837 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.580686 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-x9wkk container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" start-of-body= Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.580772 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-x9wkk" podUID="f47442a6-b454-45d5-8094-794e063f573d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.680626 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmt8j"] Feb 16 00:11:12 crc kubenswrapper[5114]: W0216 00:11:12.697316 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod747ba08a_df9e_422d_be4e_f2ababc30dea.slice/crio-a456cdef92bc4ed9155c3320e55bc1a4541f695ad91394cf65b898160f990b3b WatchSource:0}: Error finding container a456cdef92bc4ed9155c3320e55bc1a4541f695ad91394cf65b898160f990b3b: Status 404 returned error can't find the container with id a456cdef92bc4ed9155c3320e55bc1a4541f695ad91394cf65b898160f990b3b Feb 16 00:11:12 crc kubenswrapper[5114]: I0216 00:11:12.753724 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8ht4r"] Feb 16 00:11:12 crc kubenswrapper[5114]: W0216 00:11:12.762308 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb96e138d_614d_45ad_8cf4_2b68b9c05830.slice/crio-19a51b1011d2ec6a9e66dd2ad1138ed09274fae6b76ae110bafc28d5d4980107 WatchSource:0}: Error 
finding container 19a51b1011d2ec6a9e66dd2ad1138ed09274fae6b76ae110bafc28d5d4980107: Status 404 returned error can't find the container with id 19a51b1011d2ec6a9e66dd2ad1138ed09274fae6b76ae110bafc28d5d4980107 Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.276592 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-h8c98" Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.483726 5114 generic.go:358] "Generic (PLEG): container finished" podID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerID="0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9" exitCode=0 Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.483838 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ld7d" event={"ID":"a392cbd8-29d4-4a9f-a413-40249fe74474","Type":"ContainerDied","Data":"0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9"} Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.486487 5114 generic.go:358] "Generic (PLEG): container finished" podID="a1528149-fdf5-43a5-a3f9-14495b62437d" containerID="9cdf12073896d9a2d9ce510607ab1d9f82026c7b4619a7df8ebae6724464face" exitCode=0 Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.486731 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"a1528149-fdf5-43a5-a3f9-14495b62437d","Type":"ContainerDied","Data":"9cdf12073896d9a2d9ce510607ab1d9f82026c7b4619a7df8ebae6724464face"} Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.490307 5114 generic.go:358] "Generic (PLEG): container finished" podID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerID="e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0" exitCode=0 Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.490409 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ht4r" 
event={"ID":"b96e138d-614d-45ad-8cf4-2b68b9c05830","Type":"ContainerDied","Data":"e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0"} Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.490479 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ht4r" event={"ID":"b96e138d-614d-45ad-8cf4-2b68b9c05830","Type":"ContainerStarted","Data":"19a51b1011d2ec6a9e66dd2ad1138ed09274fae6b76ae110bafc28d5d4980107"} Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.496894 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" event={"ID":"747ba08a-df9e-422d-be4e-f2ababc30dea","Type":"ContainerStarted","Data":"90d8a2a069abbd568392f18ee3971e6e788cfadda8bbbc654fe454a8696aed67"} Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.496997 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.497012 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" event={"ID":"747ba08a-df9e-422d-be4e-f2ababc30dea","Type":"ContainerStarted","Data":"a456cdef92bc4ed9155c3320e55bc1a4541f695ad91394cf65b898160f990b3b"} Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.574715 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.581198 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" podStartSLOduration=126.581176928 podStartE2EDuration="2m6.581176928s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 
00:11:13.550437097 +0000 UTC m=+149.931713935" watchObservedRunningTime="2026-02-16 00:11:13.581176928 +0000 UTC m=+149.962453766" Feb 16 00:11:13 crc kubenswrapper[5114]: E0216 00:11:13.718745 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:13 crc kubenswrapper[5114]: E0216 00:11:13.729455 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:13 crc kubenswrapper[5114]: E0216 00:11:13.742430 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:13 crc kubenswrapper[5114]: E0216 00:11:13.742526 5114 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 16 00:11:13 crc kubenswrapper[5114]: I0216 00:11:13.826751 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Feb 16 00:11:14 crc 
kubenswrapper[5114]: I0216 00:11:14.401696 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-sl2nf" Feb 16 00:11:14 crc kubenswrapper[5114]: I0216 00:11:14.752853 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:14 crc kubenswrapper[5114]: I0216 00:11:14.798029 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1528149-fdf5-43a5-a3f9-14495b62437d-kubelet-dir\") pod \"a1528149-fdf5-43a5-a3f9-14495b62437d\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " Feb 16 00:11:14 crc kubenswrapper[5114]: I0216 00:11:14.798178 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1528149-fdf5-43a5-a3f9-14495b62437d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a1528149-fdf5-43a5-a3f9-14495b62437d" (UID: "a1528149-fdf5-43a5-a3f9-14495b62437d"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:11:14 crc kubenswrapper[5114]: I0216 00:11:14.798478 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1528149-fdf5-43a5-a3f9-14495b62437d-kube-api-access\") pod \"a1528149-fdf5-43a5-a3f9-14495b62437d\" (UID: \"a1528149-fdf5-43a5-a3f9-14495b62437d\") " Feb 16 00:11:14 crc kubenswrapper[5114]: I0216 00:11:14.799297 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a1528149-fdf5-43a5-a3f9-14495b62437d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:14 crc kubenswrapper[5114]: I0216 00:11:14.821790 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1528149-fdf5-43a5-a3f9-14495b62437d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a1528149-fdf5-43a5-a3f9-14495b62437d" (UID: "a1528149-fdf5-43a5-a3f9-14495b62437d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:11:14 crc kubenswrapper[5114]: I0216 00:11:14.910999 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a1528149-fdf5-43a5-a3f9-14495b62437d-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:15 crc kubenswrapper[5114]: I0216 00:11:15.514759 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"a1528149-fdf5-43a5-a3f9-14495b62437d","Type":"ContainerDied","Data":"4d122b04c4e1a9a63f88a430b252802cb003ced060a03c72d58bfff24dbe758b"} Feb 16 00:11:15 crc kubenswrapper[5114]: I0216 00:11:15.514822 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d122b04c4e1a9a63f88a430b252802cb003ced060a03c72d58bfff24dbe758b" Feb 16 00:11:15 crc kubenswrapper[5114]: I0216 00:11:15.514977 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Feb 16 00:11:16 crc kubenswrapper[5114]: I0216 00:11:16.246734 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-rswb4" Feb 16 00:11:16 crc kubenswrapper[5114]: I0216 00:11:16.255870 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-nvr4r" Feb 16 00:11:17 crc kubenswrapper[5114]: I0216 00:11:17.279827 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:11:18 crc kubenswrapper[5114]: I0216 00:11:18.544753 5114 patch_prober.go:28] interesting pod/downloads-747b44746d-x9wkk container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection 
refused" start-of-body= Feb 16 00:11:18 crc kubenswrapper[5114]: I0216 00:11:18.544829 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-x9wkk" podUID="f47442a6-b454-45d5-8094-794e063f573d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.9:8080/\": dial tcp 10.217.0.9:8080: connect: connection refused" Feb 16 00:11:19 crc kubenswrapper[5114]: I0216 00:11:19.574018 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:11:19 crc kubenswrapper[5114]: I0216 00:11:19.579554 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-l8qvm" Feb 16 00:11:22 crc kubenswrapper[5114]: I0216 00:11:22.467616 5114 ???:1] "http: TLS handshake error from 192.168.126.11:57238: no serving certificate available for the kubelet" Feb 16 00:11:22 crc kubenswrapper[5114]: I0216 00:11:22.586596 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-x9wkk" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.574419 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.574499 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.574555 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.580970 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.582585 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.664354 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.674426 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.676534 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.676622 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.681534 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6149fdd-e85e-41f7-b50a-76f70c153c44-metrics-certs\") pod \"network-metrics-daemon-vk5fl\" (UID: \"d6149fdd-e85e-41f7-b50a-76f70c153c44\") " pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.681990 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod 
\"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:11:23 crc kubenswrapper[5114]: E0216 00:11:23.706144 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:23 crc kubenswrapper[5114]: E0216 00:11:23.708330 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:23 crc kubenswrapper[5114]: E0216 00:11:23.710515 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:23 crc kubenswrapper[5114]: E0216 00:11:23.710607 5114 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.740778 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.754347 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-vk5fl" Feb 16 00:11:23 crc kubenswrapper[5114]: I0216 00:11:23.951180 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.018272 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-vk5fl"] Feb 16 00:11:26 crc kubenswrapper[5114]: W0216 00:11:26.058458 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6149fdd_e85e_41f7_b50a_76f70c153c44.slice/crio-8dffdd03b246abdb1fcdb7830d2d8e1cdd292f1624c513e184d43cf67672ad30 WatchSource:0}: Error finding container 8dffdd03b246abdb1fcdb7830d2d8e1cdd292f1624c513e184d43cf67672ad30: Status 404 returned error can't find the container with id 8dffdd03b246abdb1fcdb7830d2d8e1cdd292f1624c513e184d43cf67672ad30 Feb 16 00:11:26 crc kubenswrapper[5114]: W0216 00:11:26.061856 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a9ae5f6_97bd_46ac_bafa_ca1b4452a141.slice/crio-8f3ae63c74952e3c65e10098e87695aea8ed70f7873d20fc51513f992d2a1200 WatchSource:0}: Error finding container 8f3ae63c74952e3c65e10098e87695aea8ed70f7873d20fc51513f992d2a1200: Status 404 returned error can't find the container with id 8f3ae63c74952e3c65e10098e87695aea8ed70f7873d20fc51513f992d2a1200 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.644831 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" 
event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"5bb96d29b381751886adc8300c377c1b8c30f7791e2c088d60b68142573328ad"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.647327 5114 generic.go:358] "Generic (PLEG): container finished" podID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerID="66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a" exitCode=0 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.647564 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkj8d" event={"ID":"8ceef617-8c1b-4c87-bca9-74b3a78f25fc","Type":"ContainerDied","Data":"66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.655860 5114 generic.go:358] "Generic (PLEG): container finished" podID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerID="2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c" exitCode=0 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.655967 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w976" event={"ID":"35d79a09-4a13-4f64-b2ef-f7061b82f1f9","Type":"ContainerDied","Data":"2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.669179 5114 generic.go:358] "Generic (PLEG): container finished" podID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerID="2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a" exitCode=0 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.669265 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsm82" event={"ID":"5ffe7c6f-6349-415c-9729-182b0cc43e93","Type":"ContainerDied","Data":"2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.682489 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-vk5fl" event={"ID":"d6149fdd-e85e-41f7-b50a-76f70c153c44","Type":"ContainerStarted","Data":"8dffdd03b246abdb1fcdb7830d2d8e1cdd292f1624c513e184d43cf67672ad30"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.687209 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ht4r" event={"ID":"b96e138d-614d-45ad-8cf4-2b68b9c05830","Type":"ContainerStarted","Data":"d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.689319 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"1f134aed2f23d84c084f5186f20897b327f6e7100228c129f5c30245ce95730f"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.690662 5114 generic.go:358] "Generic (PLEG): container finished" podID="d846f09e-4870-4305-857c-b47bbe247686" containerID="745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d" exitCode=0 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.690759 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmwl" event={"ID":"d846f09e-4870-4305-857c-b47bbe247686","Type":"ContainerDied","Data":"745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.697038 5114 generic.go:358] "Generic (PLEG): container finished" podID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerID="bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802" exitCode=0 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.697304 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nplm7" 
event={"ID":"05de580f-e9d2-4045-9403-1fba0034fc3d","Type":"ContainerDied","Data":"bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.704131 5114 generic.go:358] "Generic (PLEG): container finished" podID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerID="fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a" exitCode=0 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.704469 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ld7d" event={"ID":"a392cbd8-29d4-4a9f-a413-40249fe74474","Type":"ContainerDied","Data":"fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.711008 5114 generic.go:358] "Generic (PLEG): container finished" podID="0d296a72-b033-40b3-8652-128687b79c8e" containerID="d9b307e43ba65eedc32453a44d162a9c8e6b5d10ad7f094d1ac32b121320e962" exitCode=0 Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.711168 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74kpp" event={"ID":"0d296a72-b033-40b3-8652-128687b79c8e","Type":"ContainerDied","Data":"d9b307e43ba65eedc32453a44d162a9c8e6b5d10ad7f094d1ac32b121320e962"} Feb 16 00:11:26 crc kubenswrapper[5114]: I0216 00:11:26.712993 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"8f3ae63c74952e3c65e10098e87695aea8ed70f7873d20fc51513f992d2a1200"} Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.728207 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"212a05100f8cab622900db130313883c157436072b0c31ed8e5daf92423258e5"} Feb 16 
00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.731066 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkj8d" event={"ID":"8ceef617-8c1b-4c87-bca9-74b3a78f25fc","Type":"ContainerStarted","Data":"364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c"} Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.735481 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w976" event={"ID":"35d79a09-4a13-4f64-b2ef-f7061b82f1f9","Type":"ContainerStarted","Data":"c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e"} Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.742261 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vk5fl" event={"ID":"d6149fdd-e85e-41f7-b50a-76f70c153c44","Type":"ContainerStarted","Data":"1c015baa367c0e25af752de32f2b7f617be1a55f39683175bd151981fccc5023"} Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.744635 5114 generic.go:358] "Generic (PLEG): container finished" podID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerID="d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad" exitCode=0 Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.744675 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ht4r" event={"ID":"b96e138d-614d-45ad-8cf4-2b68b9c05830","Type":"ContainerDied","Data":"d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad"} Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.749051 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"7d8bd4e95fe4604981cea0cd9758e7217fba855959429a8f8046b92a4ca80a36"} Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.751586 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"6baaf26901f73d30a8c76ea031be89153722ef37664959c96191e52d0e51cae5"} Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.816708 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xkj8d" podStartSLOduration=4.819664958 podStartE2EDuration="18.816687641s" podCreationTimestamp="2026-02-16 00:11:09 +0000 UTC" firstStartedPulling="2026-02-16 00:11:11.435392558 +0000 UTC m=+147.816669376" lastFinishedPulling="2026-02-16 00:11:25.432415201 +0000 UTC m=+161.813692059" observedRunningTime="2026-02-16 00:11:27.813301513 +0000 UTC m=+164.194578371" watchObservedRunningTime="2026-02-16 00:11:27.816687641 +0000 UTC m=+164.197964459" Feb 16 00:11:27 crc kubenswrapper[5114]: I0216 00:11:27.932153 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.762287 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmwl" event={"ID":"d846f09e-4870-4305-857c-b47bbe247686","Type":"ContainerStarted","Data":"ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a"} Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.765289 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nplm7" event={"ID":"05de580f-e9d2-4045-9403-1fba0034fc3d","Type":"ContainerStarted","Data":"ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312"} Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.767434 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ld7d" 
event={"ID":"a392cbd8-29d4-4a9f-a413-40249fe74474","Type":"ContainerStarted","Data":"015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018"} Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.770055 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74kpp" event={"ID":"0d296a72-b033-40b3-8652-128687b79c8e","Type":"ContainerStarted","Data":"53b7874c843a62a9db7449dbc9bb4258967cdeb2def438895832f02fe961bb73"} Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.773595 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsm82" event={"ID":"5ffe7c6f-6349-415c-9729-182b0cc43e93","Type":"ContainerStarted","Data":"591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3"} Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.789337 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-llmwl" podStartSLOduration=6.693455548 podStartE2EDuration="20.789319355s" podCreationTimestamp="2026-02-16 00:11:08 +0000 UTC" firstStartedPulling="2026-02-16 00:11:11.363772423 +0000 UTC m=+147.745049241" lastFinishedPulling="2026-02-16 00:11:25.45963623 +0000 UTC m=+161.840913048" observedRunningTime="2026-02-16 00:11:28.787467441 +0000 UTC m=+165.168744259" watchObservedRunningTime="2026-02-16 00:11:28.789319355 +0000 UTC m=+165.170596173" Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.809453 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8ld7d" podStartSLOduration=5.741477754 podStartE2EDuration="17.809439038s" podCreationTimestamp="2026-02-16 00:11:11 +0000 UTC" firstStartedPulling="2026-02-16 00:11:13.48497205 +0000 UTC m=+149.866248858" lastFinishedPulling="2026-02-16 00:11:25.552933324 +0000 UTC m=+161.934210142" observedRunningTime="2026-02-16 00:11:28.806339308 +0000 UTC m=+165.187616126" 
watchObservedRunningTime="2026-02-16 00:11:28.809439038 +0000 UTC m=+165.190715866" Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.847716 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9w976" podStartSLOduration=6.737458582 podStartE2EDuration="20.847687866s" podCreationTimestamp="2026-02-16 00:11:08 +0000 UTC" firstStartedPulling="2026-02-16 00:11:11.362714182 +0000 UTC m=+147.743990990" lastFinishedPulling="2026-02-16 00:11:25.472943456 +0000 UTC m=+161.854220274" observedRunningTime="2026-02-16 00:11:28.846958715 +0000 UTC m=+165.228235533" watchObservedRunningTime="2026-02-16 00:11:28.847687866 +0000 UTC m=+165.228964724" Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.848308 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fsm82" podStartSLOduration=5.998320549 podStartE2EDuration="18.848293644s" podCreationTimestamp="2026-02-16 00:11:10 +0000 UTC" firstStartedPulling="2026-02-16 00:11:12.462354317 +0000 UTC m=+148.843631135" lastFinishedPulling="2026-02-16 00:11:25.312327372 +0000 UTC m=+161.693604230" observedRunningTime="2026-02-16 00:11:28.827654105 +0000 UTC m=+165.208930923" watchObservedRunningTime="2026-02-16 00:11:28.848293644 +0000 UTC m=+165.229570522" Feb 16 00:11:28 crc kubenswrapper[5114]: I0216 00:11:28.900109 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nplm7" podStartSLOduration=5.035565849 podStartE2EDuration="17.900092995s" podCreationTimestamp="2026-02-16 00:11:11 +0000 UTC" firstStartedPulling="2026-02-16 00:11:12.446260411 +0000 UTC m=+148.827537229" lastFinishedPulling="2026-02-16 00:11:25.310787517 +0000 UTC m=+161.692064375" observedRunningTime="2026-02-16 00:11:28.899186198 +0000 UTC m=+165.280463036" watchObservedRunningTime="2026-02-16 00:11:28.900092995 +0000 UTC m=+165.281369813" Feb 16 00:11:28 crc 
kubenswrapper[5114]: I0216 00:11:28.901480 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-74kpp" podStartSLOduration=6.8667457689999996 podStartE2EDuration="20.901474835s" podCreationTimestamp="2026-02-16 00:11:08 +0000 UTC" firstStartedPulling="2026-02-16 00:11:11.407567512 +0000 UTC m=+147.788844330" lastFinishedPulling="2026-02-16 00:11:25.442296578 +0000 UTC m=+161.823573396" observedRunningTime="2026-02-16 00:11:28.879356534 +0000 UTC m=+165.260633402" watchObservedRunningTime="2026-02-16 00:11:28.901474835 +0000 UTC m=+165.282751653" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.038082 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.038674 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.089572 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.089712 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.329150 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.329214 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-74kpp" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.648957 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:29 crc 
kubenswrapper[5114]: I0216 00:11:29.649035 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xkj8d" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.806173 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ht4r" event={"ID":"b96e138d-614d-45ad-8cf4-2b68b9c05830","Type":"ContainerStarted","Data":"afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed"} Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.811291 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-vk5fl" event={"ID":"d6149fdd-e85e-41f7-b50a-76f70c153c44","Type":"ContainerStarted","Data":"0488255f035af7dcec716c915db7e7c10332f60fbe7f50ce9eac2c99d56eafa9"} Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.836949 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8ht4r" podStartSLOduration=5.852782696 podStartE2EDuration="17.836921851s" podCreationTimestamp="2026-02-16 00:11:12 +0000 UTC" firstStartedPulling="2026-02-16 00:11:13.491807848 +0000 UTC m=+149.873084666" lastFinishedPulling="2026-02-16 00:11:25.475947003 +0000 UTC m=+161.857223821" observedRunningTime="2026-02-16 00:11:29.835407848 +0000 UTC m=+166.216684696" watchObservedRunningTime="2026-02-16 00:11:29.836921851 +0000 UTC m=+166.218198679" Feb 16 00:11:29 crc kubenswrapper[5114]: I0216 00:11:29.874777 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-vk5fl" podStartSLOduration=142.874739217 podStartE2EDuration="2m22.874739217s" podCreationTimestamp="2026-02-16 00:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:29.870388951 +0000 UTC m=+166.251665819" watchObservedRunningTime="2026-02-16 00:11:29.874739217 
+0000 UTC m=+166.256016075" Feb 16 00:11:30 crc kubenswrapper[5114]: I0216 00:11:30.582812 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-74kpp" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:30 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Feb 16 00:11:30 crc kubenswrapper[5114]: > Feb 16 00:11:30 crc kubenswrapper[5114]: I0216 00:11:30.588461 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9w976" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:30 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Feb 16 00:11:30 crc kubenswrapper[5114]: > Feb 16 00:11:30 crc kubenswrapper[5114]: I0216 00:11:30.589049 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-llmwl" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:30 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Feb 16 00:11:30 crc kubenswrapper[5114]: > Feb 16 00:11:30 crc kubenswrapper[5114]: I0216 00:11:30.712737 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-xkj8d" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:30 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Feb 16 00:11:30 crc kubenswrapper[5114]: > Feb 16 00:11:31 crc kubenswrapper[5114]: I0216 00:11:31.285937 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:31 crc kubenswrapper[5114]: I0216 00:11:31.286000 5114 kubelet.go:2658] "SyncLoop (probe)" 
probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:11:31 crc kubenswrapper[5114]: I0216 00:11:31.501739 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:31 crc kubenswrapper[5114]: I0216 00:11:31.501845 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nplm7" Feb 16 00:11:32 crc kubenswrapper[5114]: I0216 00:11:32.113708 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:32 crc kubenswrapper[5114]: I0216 00:11:32.113779 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:11:32 crc kubenswrapper[5114]: I0216 00:11:32.346349 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fsm82" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:32 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Feb 16 00:11:32 crc kubenswrapper[5114]: > Feb 16 00:11:32 crc kubenswrapper[5114]: I0216 00:11:32.541871 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:32 crc kubenswrapper[5114]: I0216 00:11:32.541915 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:32 crc kubenswrapper[5114]: I0216 00:11:32.547774 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nplm7" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:32 crc kubenswrapper[5114]: timeout: failed to 
connect service ":50051" within 1s Feb 16 00:11:32 crc kubenswrapper[5114]: > Feb 16 00:11:33 crc kubenswrapper[5114]: I0216 00:11:33.157989 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8ld7d" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:33 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Feb 16 00:11:33 crc kubenswrapper[5114]: > Feb 16 00:11:33 crc kubenswrapper[5114]: I0216 00:11:33.584559 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8ht4r" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="registry-server" probeResult="failure" output=< Feb 16 00:11:33 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s Feb 16 00:11:33 crc kubenswrapper[5114]: > Feb 16 00:11:33 crc kubenswrapper[5114]: E0216 00:11:33.707825 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:33 crc kubenswrapper[5114]: E0216 00:11:33.710546 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:33 crc kubenswrapper[5114]: E0216 00:11:33.712438 5114 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" cmd=["/bin/bash","-c","test -f /ready/ready"] Feb 16 00:11:33 crc kubenswrapper[5114]: E0216 00:11:33.712520 5114 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Feb 16 00:11:34 crc kubenswrapper[5114]: I0216 00:11:34.514034 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.419281 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-n7nf8_8d81cb10-abbd-4c04-9632-446be1e89c2b/kube-multus-additional-cni-plugins/0.log" Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.419865 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.483678 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8d81cb10-abbd-4c04-9632-446be1e89c2b-tuning-conf-dir\") pod \"8d81cb10-abbd-4c04-9632-446be1e89c2b\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.483843 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8d81cb10-abbd-4c04-9632-446be1e89c2b-cni-sysctl-allowlist\") pod \"8d81cb10-abbd-4c04-9632-446be1e89c2b\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") " Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.483869 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d81cb10-abbd-4c04-9632-446be1e89c2b-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "8d81cb10-abbd-4c04-9632-446be1e89c2b" (UID: "8d81cb10-abbd-4c04-9632-446be1e89c2b"). InnerVolumeSpecName "tuning-conf-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.484151 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8d81cb10-abbd-4c04-9632-446be1e89c2b-ready\") pod \"8d81cb10-abbd-4c04-9632-446be1e89c2b\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") "
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.484285 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhdtn\" (UniqueName: \"kubernetes.io/projected/8d81cb10-abbd-4c04-9632-446be1e89c2b-kube-api-access-nhdtn\") pod \"8d81cb10-abbd-4c04-9632-446be1e89c2b\" (UID: \"8d81cb10-abbd-4c04-9632-446be1e89c2b\") "
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.484760 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d81cb10-abbd-4c04-9632-446be1e89c2b-ready" (OuterVolumeSpecName: "ready") pod "8d81cb10-abbd-4c04-9632-446be1e89c2b" (UID: "8d81cb10-abbd-4c04-9632-446be1e89c2b"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.484918 5114 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/8d81cb10-abbd-4c04-9632-446be1e89c2b-tuning-conf-dir\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.486157 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d81cb10-abbd-4c04-9632-446be1e89c2b-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "8d81cb10-abbd-4c04-9632-446be1e89c2b" (UID: "8d81cb10-abbd-4c04-9632-446be1e89c2b"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.495627 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d81cb10-abbd-4c04-9632-446be1e89c2b-kube-api-access-nhdtn" (OuterVolumeSpecName: "kube-api-access-nhdtn") pod "8d81cb10-abbd-4c04-9632-446be1e89c2b" (UID: "8d81cb10-abbd-4c04-9632-446be1e89c2b"). InnerVolumeSpecName "kube-api-access-nhdtn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.586092 5114 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/8d81cb10-abbd-4c04-9632-446be1e89c2b-ready\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.586158 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nhdtn\" (UniqueName: \"kubernetes.io/projected/8d81cb10-abbd-4c04-9632-446be1e89c2b-kube-api-access-nhdtn\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.586175 5114 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/8d81cb10-abbd-4c04-9632-446be1e89c2b-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.854024 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-n7nf8_8d81cb10-abbd-4c04-9632-446be1e89c2b/kube-multus-additional-cni-plugins/0.log"
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.854728 5114 generic.go:358] "Generic (PLEG): container finished" podID="8d81cb10-abbd-4c04-9632-446be1e89c2b" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609" exitCode=137
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.854864 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8"
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.854881 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" event={"ID":"8d81cb10-abbd-4c04-9632-446be1e89c2b","Type":"ContainerDied","Data":"fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609"}
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.854955 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-n7nf8" event={"ID":"8d81cb10-abbd-4c04-9632-446be1e89c2b","Type":"ContainerDied","Data":"815a9e1b5d1248674a5d37c6683c25ee9f6552d837d8a4abfb2c9d2098cb7f27"}
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.854986 5114 scope.go:117] "RemoveContainer" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609"
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.886325 5114 scope.go:117] "RemoveContainer" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609"
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.887313 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-n7nf8"]
Feb 16 00:11:36 crc kubenswrapper[5114]: E0216 00:11:36.887334 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609\": container with ID starting with fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609 not found: ID does not exist" containerID="fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609"
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.887389 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609"} err="failed to get container status \"fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609\": rpc error: code = NotFound desc = could not find container \"fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609\": container with ID starting with fbfa32fb4e2da512ecec09434b8f6bdef647f901e90b14214138babb061cf609 not found: ID does not exist"
Feb 16 00:11:36 crc kubenswrapper[5114]: I0216 00:11:36.890537 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-n7nf8"]
Feb 16 00:11:37 crc kubenswrapper[5114]: I0216 00:11:37.283757 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-47pxl"
Feb 16 00:11:37 crc kubenswrapper[5114]: I0216 00:11:37.835154 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" path="/var/lib/kubelet/pods/8d81cb10-abbd-4c04-9632-446be1e89c2b/volumes"
Feb 16 00:11:37 crc kubenswrapper[5114]: I0216 00:11:37.863027 5114 generic.go:358] "Generic (PLEG): container finished" podID="bdc47cbe-a3d3-432a-b8bb-399a35be1822" containerID="60f2c0dea61e85edb3e8e336d4ec8987f3e5bb7b7a5f7650201e82dd556534a0" exitCode=0
Feb 16 00:11:37 crc kubenswrapper[5114]: I0216 00:11:37.863192 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29520000-tmdgt" event={"ID":"bdc47cbe-a3d3-432a-b8bb-399a35be1822","Type":"ContainerDied","Data":"60f2c0dea61e85edb3e8e336d4ec8987f3e5bb7b7a5f7650201e82dd556534a0"}
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.099401 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-llmwl"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.138408 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9w976"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.150215 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-llmwl"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.194444 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9w976"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.275725 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-tmdgt"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.331466 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwwlf\" (UniqueName: \"kubernetes.io/projected/bdc47cbe-a3d3-432a-b8bb-399a35be1822-kube-api-access-vwwlf\") pod \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") "
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.331520 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdc47cbe-a3d3-432a-b8bb-399a35be1822-serviceca\") pod \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\" (UID: \"bdc47cbe-a3d3-432a-b8bb-399a35be1822\") "
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.332445 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc47cbe-a3d3-432a-b8bb-399a35be1822-serviceca" (OuterVolumeSpecName: "serviceca") pod "bdc47cbe-a3d3-432a-b8bb-399a35be1822" (UID: "bdc47cbe-a3d3-432a-b8bb-399a35be1822"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.342199 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc47cbe-a3d3-432a-b8bb-399a35be1822-kube-api-access-vwwlf" (OuterVolumeSpecName: "kube-api-access-vwwlf") pod "bdc47cbe-a3d3-432a-b8bb-399a35be1822" (UID: "bdc47cbe-a3d3-432a-b8bb-399a35be1822"). InnerVolumeSpecName "kube-api-access-vwwlf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.376678 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-74kpp"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.432897 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vwwlf\" (UniqueName: \"kubernetes.io/projected/bdc47cbe-a3d3-432a-b8bb-399a35be1822-kube-api-access-vwwlf\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.433324 5114 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/bdc47cbe-a3d3-432a-b8bb-399a35be1822-serviceca\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.434233 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-74kpp"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.699724 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xkj8d"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.742627 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xkj8d"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.890639 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29520000-tmdgt"
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.891286 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29520000-tmdgt" event={"ID":"bdc47cbe-a3d3-432a-b8bb-399a35be1822","Type":"ContainerDied","Data":"582cafe72e294e516111c1c8151f070f1c66724e4b7952bc46ffa3937586a314"}
Feb 16 00:11:39 crc kubenswrapper[5114]: I0216 00:11:39.891317 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="582cafe72e294e516111c1c8151f070f1c66724e4b7952bc46ffa3937586a314"
Feb 16 00:11:40 crc kubenswrapper[5114]: I0216 00:11:40.976384 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkj8d"]
Feb 16 00:11:40 crc kubenswrapper[5114]: I0216 00:11:40.977754 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xkj8d" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="registry-server" containerID="cri-o://364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c" gracePeriod=2
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.333831 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fsm82"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.388791 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fsm82"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.442788 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkj8d"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.549899 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nplm7"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.559821 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-74kpp"]
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.560103 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-74kpp" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="registry-server" containerID="cri-o://53b7874c843a62a9db7449dbc9bb4258967cdeb2def438895832f02fe961bb73" gracePeriod=2
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.572901 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-catalog-content\") pod \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") "
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.573042 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p22gz\" (UniqueName: \"kubernetes.io/projected/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-kube-api-access-p22gz\") pod \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") "
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.573166 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-utilities\") pod \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\" (UID: \"8ceef617-8c1b-4c87-bca9-74b3a78f25fc\") "
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.574301 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-utilities" (OuterVolumeSpecName: "utilities") pod "8ceef617-8c1b-4c87-bca9-74b3a78f25fc" (UID: "8ceef617-8c1b-4c87-bca9-74b3a78f25fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.584488 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-kube-api-access-p22gz" (OuterVolumeSpecName: "kube-api-access-p22gz") pod "8ceef617-8c1b-4c87-bca9-74b3a78f25fc" (UID: "8ceef617-8c1b-4c87-bca9-74b3a78f25fc"). InnerVolumeSpecName "kube-api-access-p22gz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.604197 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nplm7"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.619415 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ceef617-8c1b-4c87-bca9-74b3a78f25fc" (UID: "8ceef617-8c1b-4c87-bca9-74b3a78f25fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.675468 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.676126 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.676199 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p22gz\" (UniqueName: \"kubernetes.io/projected/8ceef617-8c1b-4c87-bca9-74b3a78f25fc-kube-api-access-p22gz\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.905104 5114 generic.go:358] "Generic (PLEG): container finished" podID="0d296a72-b033-40b3-8652-128687b79c8e" containerID="53b7874c843a62a9db7449dbc9bb4258967cdeb2def438895832f02fe961bb73" exitCode=0
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.905404 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74kpp" event={"ID":"0d296a72-b033-40b3-8652-128687b79c8e","Type":"ContainerDied","Data":"53b7874c843a62a9db7449dbc9bb4258967cdeb2def438895832f02fe961bb73"}
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.907947 5114 generic.go:358] "Generic (PLEG): container finished" podID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerID="364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c" exitCode=0
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.909051 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkj8d" event={"ID":"8ceef617-8c1b-4c87-bca9-74b3a78f25fc","Type":"ContainerDied","Data":"364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c"}
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.909080 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xkj8d" event={"ID":"8ceef617-8c1b-4c87-bca9-74b3a78f25fc","Type":"ContainerDied","Data":"f94131c0677e521e383eb14b6403f6c0feaf073de411584cc25f9b459a829998"}
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.909140 5114 scope.go:117] "RemoveContainer" containerID="364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.909513 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xkj8d"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.933937 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xkj8d"]
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.936324 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xkj8d"]
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.941021 5114 scope.go:117] "RemoveContainer" containerID="66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.961535 5114 scope.go:117] "RemoveContainer" containerID="99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.986440 5114 scope.go:117] "RemoveContainer" containerID="364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c"
Feb 16 00:11:41 crc kubenswrapper[5114]: E0216 00:11:41.987372 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c\": container with ID starting with 364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c not found: ID does not exist" containerID="364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.987417 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c"} err="failed to get container status \"364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c\": rpc error: code = NotFound desc = could not find container \"364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c\": container with ID starting with 364401070a2ccab847117c97a4e08739ef2be45b7ba772fa7bc462daf38d5e7c not found: ID does not exist"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.987442 5114 scope.go:117] "RemoveContainer" containerID="66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a"
Feb 16 00:11:41 crc kubenswrapper[5114]: E0216 00:11:41.987699 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a\": container with ID starting with 66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a not found: ID does not exist" containerID="66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.987740 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a"} err="failed to get container status \"66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a\": rpc error: code = NotFound desc = could not find container \"66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a\": container with ID starting with 66d7caefdd5ae1e5d32692182f032a8216751dc0d8bf6f98c70a4ff03ba5f47a not found: ID does not exist"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.987757 5114 scope.go:117] "RemoveContainer" containerID="99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c"
Feb 16 00:11:41 crc kubenswrapper[5114]: E0216 00:11:41.988091 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c\": container with ID starting with 99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c not found: ID does not exist" containerID="99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.988135 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c"} err="failed to get container status \"99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c\": rpc error: code = NotFound desc = could not find container \"99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c\": container with ID starting with 99691c748fbd37a3b82bfd242427a6373367c2b390dac2d72f119182565af12c not found: ID does not exist"
Feb 16 00:11:41 crc kubenswrapper[5114]: I0216 00:11:41.998847 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-74kpp"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.083606 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-catalog-content\") pod \"0d296a72-b033-40b3-8652-128687b79c8e\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") "
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.083844 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-utilities\") pod \"0d296a72-b033-40b3-8652-128687b79c8e\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") "
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.085279 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-utilities" (OuterVolumeSpecName: "utilities") pod "0d296a72-b033-40b3-8652-128687b79c8e" (UID: "0d296a72-b033-40b3-8652-128687b79c8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.085470 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq7tj\" (UniqueName: \"kubernetes.io/projected/0d296a72-b033-40b3-8652-128687b79c8e-kube-api-access-qq7tj\") pod \"0d296a72-b033-40b3-8652-128687b79c8e\" (UID: \"0d296a72-b033-40b3-8652-128687b79c8e\") "
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.085731 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.099177 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d296a72-b033-40b3-8652-128687b79c8e-kube-api-access-qq7tj" (OuterVolumeSpecName: "kube-api-access-qq7tj") pod "0d296a72-b033-40b3-8652-128687b79c8e" (UID: "0d296a72-b033-40b3-8652-128687b79c8e"). InnerVolumeSpecName "kube-api-access-qq7tj". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.151463 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d296a72-b033-40b3-8652-128687b79c8e" (UID: "0d296a72-b033-40b3-8652-128687b79c8e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.170685 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8ld7d"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.187813 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d296a72-b033-40b3-8652-128687b79c8e-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.187881 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qq7tj\" (UniqueName: \"kubernetes.io/projected/0d296a72-b033-40b3-8652-128687b79c8e-kube-api-access-qq7tj\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.215239 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8ld7d"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.603809 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8ht4r"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.677977 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8ht4r"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.920920 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-74kpp" event={"ID":"0d296a72-b033-40b3-8652-128687b79c8e","Type":"ContainerDied","Data":"49c6fc638881f276c49080b60c0a211d47ea71819f40efceac96c7582a614544"}
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.921013 5114 scope.go:117] "RemoveContainer" containerID="53b7874c843a62a9db7449dbc9bb4258967cdeb2def438895832f02fe961bb73"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.921465 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-74kpp"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.943151 5114 scope.go:117] "RemoveContainer" containerID="d9b307e43ba65eedc32453a44d162a9c8e6b5d10ad7f094d1ac32b121320e962"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.969470 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-74kpp"]
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.971550 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-74kpp"]
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.972880 5114 scope.go:117] "RemoveContainer" containerID="d9b96d9a56a035f36f44d26e97c781c44cd2e6b7ffc63e5ec875b3ef4151551c"
Feb 16 00:11:42 crc kubenswrapper[5114]: I0216 00:11:42.974713 5114 ???:1] "http: TLS handshake error from 192.168.126.11:39836: no serving certificate available for the kubelet"
Feb 16 00:11:43 crc kubenswrapper[5114]: I0216 00:11:43.833464 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d296a72-b033-40b3-8652-128687b79c8e" path="/var/lib/kubelet/pods/0d296a72-b033-40b3-8652-128687b79c8e/volumes"
Feb 16 00:11:43 crc kubenswrapper[5114]: I0216 00:11:43.835315 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" path="/var/lib/kubelet/pods/8ceef617-8c1b-4c87-bca9-74b3a78f25fc/volumes"
Feb 16 00:11:43 crc kubenswrapper[5114]: I0216 00:11:43.964346 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nplm7"]
Feb 16 00:11:43 crc kubenswrapper[5114]: I0216 00:11:43.967343 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nplm7" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="registry-server" containerID="cri-o://ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312" gracePeriod=2
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.403601 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nplm7"
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.521933 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-utilities\") pod \"05de580f-e9d2-4045-9403-1fba0034fc3d\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") "
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.522003 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8zc6\" (UniqueName: \"kubernetes.io/projected/05de580f-e9d2-4045-9403-1fba0034fc3d-kube-api-access-s8zc6\") pod \"05de580f-e9d2-4045-9403-1fba0034fc3d\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") "
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.522031 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-catalog-content\") pod \"05de580f-e9d2-4045-9403-1fba0034fc3d\" (UID: \"05de580f-e9d2-4045-9403-1fba0034fc3d\") "
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.523373 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-utilities" (OuterVolumeSpecName: "utilities") pod "05de580f-e9d2-4045-9403-1fba0034fc3d" (UID: "05de580f-e9d2-4045-9403-1fba0034fc3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.530641 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05de580f-e9d2-4045-9403-1fba0034fc3d-kube-api-access-s8zc6" (OuterVolumeSpecName: "kube-api-access-s8zc6") pod "05de580f-e9d2-4045-9403-1fba0034fc3d" (UID: "05de580f-e9d2-4045-9403-1fba0034fc3d"). InnerVolumeSpecName "kube-api-access-s8zc6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.555745 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05de580f-e9d2-4045-9403-1fba0034fc3d" (UID: "05de580f-e9d2-4045-9403-1fba0034fc3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.623877 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-utilities\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.623922 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s8zc6\" (UniqueName: \"kubernetes.io/projected/05de580f-e9d2-4045-9403-1fba0034fc3d-kube-api-access-s8zc6\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.623934 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05de580f-e9d2-4045-9403-1fba0034fc3d-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.940461 5114 generic.go:358] "Generic (PLEG): container finished" podID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerID="ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312" exitCode=0
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.940560 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nplm7" event={"ID":"05de580f-e9d2-4045-9403-1fba0034fc3d","Type":"ContainerDied","Data":"ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312"}
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.940612 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nplm7"
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.941214 5114 scope.go:117] "RemoveContainer" containerID="ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312"
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.941180 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nplm7" event={"ID":"05de580f-e9d2-4045-9403-1fba0034fc3d","Type":"ContainerDied","Data":"bf94e4081d4c792d1565708c09a1f87c089381a437042bf2e7c7e35d64ef24ec"}
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.965873 5114 scope.go:117] "RemoveContainer" containerID="bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802"
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.989046 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nplm7"]
Feb 16 00:11:44 crc kubenswrapper[5114]: I0216 00:11:44.991405 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nplm7"]
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.010003 5114 scope.go:117] "RemoveContainer" containerID="4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42"
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.039756 5114 scope.go:117] "RemoveContainer" containerID="ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312"
Feb 16 00:11:45 crc kubenswrapper[5114]: E0216 00:11:45.040301 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312\": container with ID starting with ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312 not found: ID does not exist" containerID="ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312"
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.040374 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312"} err="failed to get container status \"ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312\": rpc error: code = NotFound desc = could not find container \"ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312\": container with ID starting with ba29849fa1ef9dd713cbd4df3932e968f73dcb91fff42d7046005ba42dfa8312 not found: ID does not exist"
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.040415 5114 scope.go:117] "RemoveContainer" containerID="bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802"
Feb 16 00:11:45 crc kubenswrapper[5114]: E0216 00:11:45.040939 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802\": container with ID starting with bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802 not found: ID does not exist" containerID="bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802"
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.040983 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802"} err="failed to get container status \"bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802\": rpc error: code = NotFound desc = could not find container \"bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802\": container with ID starting with bf8ea2632821487b3e2db0d46a92039a5600a5854e97f33b4f88d7747b02a802 not found: ID does not exist"
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.041011 5114 scope.go:117] "RemoveContainer" containerID="4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42"
Feb 16 00:11:45 crc kubenswrapper[5114]: E0216 00:11:45.041302 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42\": container with ID starting with 4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42 not found: ID does not exist" containerID="4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42"
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.041325 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42"} err="failed to get container status \"4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42\": rpc error: code = NotFound desc = could not find container \"4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42\": container with ID starting with 4e408065c6c4d6470a2c1f9e19265295750ab573142c6423dd5d5adb38345f42 not found: ID does not exist"
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.645651 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"]
Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646407 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="extract-utilities"
Feb 16 00:11:45 crc kubenswrapper[5114]:
I0216 00:11:45.646423 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="extract-utilities" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646431 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646437 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646455 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646460 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646476 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="extract-utilities" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646483 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="extract-utilities" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646489 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="extract-utilities" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646495 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="extract-utilities" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646502 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" 
containerName="kube-multus-additional-cni-plugins" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646509 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" containerName="kube-multus-additional-cni-plugins" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646522 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdc47cbe-a3d3-432a-b8bb-399a35be1822" containerName="image-pruner" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646528 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc47cbe-a3d3-432a-b8bb-399a35be1822" containerName="image-pruner" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646536 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="extract-content" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646542 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="extract-content" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646550 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="extract-content" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646557 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="extract-content" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646564 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="extract-content" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646570 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="extract-content" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646578 5114 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646583 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646597 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1528149-fdf5-43a5-a3f9-14495b62437d" containerName="pruner" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646602 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1528149-fdf5-43a5-a3f9-14495b62437d" containerName="pruner" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646702 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="8ceef617-8c1b-4c87-bca9-74b3a78f25fc" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646711 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1528149-fdf5-43a5-a3f9-14495b62437d" containerName="pruner" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646720 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="bdc47cbe-a3d3-432a-b8bb-399a35be1822" containerName="image-pruner" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646728 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d81cb10-abbd-4c04-9632-446be1e89c2b" containerName="kube-multus-additional-cni-plugins" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646739 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="0d296a72-b033-40b3-8652-128687b79c8e" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.646747 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" containerName="registry-server" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 
00:11:45.658976 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.665094 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.672678 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.685041 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.742277 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772423b8-2029-4ca9-92d9-74be05ce21a6-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.742335 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772423b8-2029-4ca9-92d9-74be05ce21a6-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.827732 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05de580f-e9d2-4045-9403-1fba0034fc3d" path="/var/lib/kubelet/pods/05de580f-e9d2-4045-9403-1fba0034fc3d/volumes" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.843861 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/772423b8-2029-4ca9-92d9-74be05ce21a6-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.843938 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772423b8-2029-4ca9-92d9-74be05ce21a6-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.844040 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772423b8-2029-4ca9-92d9-74be05ce21a6-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.876842 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772423b8-2029-4ca9-92d9-74be05ce21a6-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:45 crc kubenswrapper[5114]: I0216 00:11:45.986763 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.368756 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ht4r"] Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.369439 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8ht4r" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="registry-server" containerID="cri-o://afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed" gracePeriod=2 Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.420464 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Feb 16 00:11:46 crc kubenswrapper[5114]: W0216 00:11:46.436949 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod772423b8_2029_4ca9_92d9_74be05ce21a6.slice/crio-2949e4336d3b85bbd70138e03bff9a720bdd00d21ecaf94d564753e991454589 WatchSource:0}: Error finding container 2949e4336d3b85bbd70138e03bff9a720bdd00d21ecaf94d564753e991454589: Status 404 returned error can't find the container with id 2949e4336d3b85bbd70138e03bff9a720bdd00d21ecaf94d564753e991454589 Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.798589 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.955647 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"772423b8-2029-4ca9-92d9-74be05ce21a6","Type":"ContainerStarted","Data":"3d82b5ca49e55dc287b1fbb75bbe4cc6c6a9edbd388b764715b3eb3171e1a7fa"} Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.955756 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"772423b8-2029-4ca9-92d9-74be05ce21a6","Type":"ContainerStarted","Data":"2949e4336d3b85bbd70138e03bff9a720bdd00d21ecaf94d564753e991454589"} Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.959086 5114 generic.go:358] "Generic (PLEG): container finished" podID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerID="afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed" exitCode=0 Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.959238 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ht4r" event={"ID":"b96e138d-614d-45ad-8cf4-2b68b9c05830","Type":"ContainerDied","Data":"afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed"} Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.959295 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ht4r" event={"ID":"b96e138d-614d-45ad-8cf4-2b68b9c05830","Type":"ContainerDied","Data":"19a51b1011d2ec6a9e66dd2ad1138ed09274fae6b76ae110bafc28d5d4980107"} Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.960134 5114 scope.go:117] "RemoveContainer" containerID="afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed" Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.960339 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8ht4r" Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.965993 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t8ff\" (UniqueName: \"kubernetes.io/projected/b96e138d-614d-45ad-8cf4-2b68b9c05830-kube-api-access-7t8ff\") pod \"b96e138d-614d-45ad-8cf4-2b68b9c05830\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.966198 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-catalog-content\") pod \"b96e138d-614d-45ad-8cf4-2b68b9c05830\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.966542 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-utilities\") pod \"b96e138d-614d-45ad-8cf4-2b68b9c05830\" (UID: \"b96e138d-614d-45ad-8cf4-2b68b9c05830\") " Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.968260 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-utilities" (OuterVolumeSpecName: "utilities") pod "b96e138d-614d-45ad-8cf4-2b68b9c05830" (UID: "b96e138d-614d-45ad-8cf4-2b68b9c05830"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.980366 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96e138d-614d-45ad-8cf4-2b68b9c05830-kube-api-access-7t8ff" (OuterVolumeSpecName: "kube-api-access-7t8ff") pod "b96e138d-614d-45ad-8cf4-2b68b9c05830" (UID: "b96e138d-614d-45ad-8cf4-2b68b9c05830"). InnerVolumeSpecName "kube-api-access-7t8ff". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:11:46 crc kubenswrapper[5114]: I0216 00:11:46.981843 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-12-crc" podStartSLOduration=1.9818180509999999 podStartE2EDuration="1.981818051s" podCreationTimestamp="2026-02-16 00:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:46.980015408 +0000 UTC m=+183.361292226" watchObservedRunningTime="2026-02-16 00:11:46.981818051 +0000 UTC m=+183.363094869" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.012755 5114 scope.go:117] "RemoveContainer" containerID="d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.031122 5114 scope.go:117] "RemoveContainer" containerID="e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.049611 5114 scope.go:117] "RemoveContainer" containerID="afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed" Feb 16 00:11:47 crc kubenswrapper[5114]: E0216 00:11:47.050278 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed\": container with ID starting with afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed not found: ID does not exist" containerID="afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.050328 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed"} err="failed to get container status \"afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed\": rpc error: code = 
NotFound desc = could not find container \"afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed\": container with ID starting with afbbf190c49c6e37e65772bd9e5f01719741f9f0556880b062c2d338967035ed not found: ID does not exist" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.050358 5114 scope.go:117] "RemoveContainer" containerID="d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad" Feb 16 00:11:47 crc kubenswrapper[5114]: E0216 00:11:47.050810 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad\": container with ID starting with d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad not found: ID does not exist" containerID="d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.050828 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad"} err="failed to get container status \"d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad\": rpc error: code = NotFound desc = could not find container \"d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad\": container with ID starting with d0f8bbfdbc4f981ed93a715289366647cf6df8838da00e642d7dd7abf2d708ad not found: ID does not exist" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.050842 5114 scope.go:117] "RemoveContainer" containerID="e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0" Feb 16 00:11:47 crc kubenswrapper[5114]: E0216 00:11:47.051317 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0\": container with ID starting with 
e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0 not found: ID does not exist" containerID="e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.051354 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0"} err="failed to get container status \"e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0\": rpc error: code = NotFound desc = could not find container \"e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0\": container with ID starting with e57b1d001afcfc2223585e33801fd05e81d5be38513795ff57a41af77a3db2d0 not found: ID does not exist" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.069451 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7t8ff\" (UniqueName: \"kubernetes.io/projected/b96e138d-614d-45ad-8cf4-2b68b9c05830-kube-api-access-7t8ff\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.069487 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.097628 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b96e138d-614d-45ad-8cf4-2b68b9c05830" (UID: "b96e138d-614d-45ad-8cf4-2b68b9c05830"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.170621 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e138d-614d-45ad-8cf4-2b68b9c05830-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.300365 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ht4r"] Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.303712 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8ht4r"] Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.830514 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" path="/var/lib/kubelet/pods/b96e138d-614d-45ad-8cf4-2b68b9c05830/volumes" Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.970411 5114 generic.go:358] "Generic (PLEG): container finished" podID="772423b8-2029-4ca9-92d9-74be05ce21a6" containerID="3d82b5ca49e55dc287b1fbb75bbe4cc6c6a9edbd388b764715b3eb3171e1a7fa" exitCode=0 Feb 16 00:11:47 crc kubenswrapper[5114]: I0216 00:11:47.970472 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"772423b8-2029-4ca9-92d9-74be05ce21a6","Type":"ContainerDied","Data":"3d82b5ca49e55dc287b1fbb75bbe4cc6c6a9edbd388b764715b3eb3171e1a7fa"} Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.272413 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.402361 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772423b8-2029-4ca9-92d9-74be05ce21a6-kubelet-dir\") pod \"772423b8-2029-4ca9-92d9-74be05ce21a6\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.402475 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/772423b8-2029-4ca9-92d9-74be05ce21a6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "772423b8-2029-4ca9-92d9-74be05ce21a6" (UID: "772423b8-2029-4ca9-92d9-74be05ce21a6"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.402500 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772423b8-2029-4ca9-92d9-74be05ce21a6-kube-api-access\") pod \"772423b8-2029-4ca9-92d9-74be05ce21a6\" (UID: \"772423b8-2029-4ca9-92d9-74be05ce21a6\") " Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.403021 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/772423b8-2029-4ca9-92d9-74be05ce21a6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.410801 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772423b8-2029-4ca9-92d9-74be05ce21a6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "772423b8-2029-4ca9-92d9-74be05ce21a6" (UID: "772423b8-2029-4ca9-92d9-74be05ce21a6"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.503980 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/772423b8-2029-4ca9-92d9-74be05ce21a6-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.983019 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.983032 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"772423b8-2029-4ca9-92d9-74be05ce21a6","Type":"ContainerDied","Data":"2949e4336d3b85bbd70138e03bff9a720bdd00d21ecaf94d564753e991454589"} Feb 16 00:11:49 crc kubenswrapper[5114]: I0216 00:11:49.983570 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2949e4336d3b85bbd70138e03bff9a720bdd00d21ecaf94d564753e991454589" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.644446 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646220 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="extract-content" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646238 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="extract-content" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646293 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="registry-server" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646303 5114 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="registry-server" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646316 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="extract-utilities" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646324 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="extract-utilities" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646360 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="772423b8-2029-4ca9-92d9-74be05ce21a6" containerName="pruner" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646366 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="772423b8-2029-4ca9-92d9-74be05ce21a6" containerName="pruner" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646571 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="b96e138d-614d-45ad-8cf4-2b68b9c05830" containerName="registry-server" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.646593 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="772423b8-2029-4ca9-92d9-74be05ce21a6" containerName="pruner" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.658526 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.664136 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.666349 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.666887 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.820018 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43f2d947-e8e7-4739-ade2-215a72259fd3-kube-api-access\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.820386 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-var-lock\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.820481 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-kubelet-dir\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.921779 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/43f2d947-e8e7-4739-ade2-215a72259fd3-kube-api-access\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.922116 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-var-lock\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.922298 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-kubelet-dir\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.922237 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-var-lock\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.922408 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-kubelet-dir\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.947619 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43f2d947-e8e7-4739-ade2-215a72259fd3-kube-api-access\") pod \"installer-12-crc\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " 
pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:50 crc kubenswrapper[5114]: I0216 00:11:50.993598 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:11:51 crc kubenswrapper[5114]: I0216 00:11:51.212378 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Feb 16 00:11:51 crc kubenswrapper[5114]: I0216 00:11:51.999435 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"43f2d947-e8e7-4739-ade2-215a72259fd3","Type":"ContainerStarted","Data":"15b84f1d59ac59688c1635aa9b10c85dd778c6aace1aaa2f0fa018d2ead42462"} Feb 16 00:11:51 crc kubenswrapper[5114]: I0216 00:11:51.999804 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"43f2d947-e8e7-4739-ade2-215a72259fd3","Type":"ContainerStarted","Data":"ddab52b1403b7244e73ff8084bdf3806c7d4e0bb05dd8d0193d7dd6ebce9c418"} Feb 16 00:11:58 crc kubenswrapper[5114]: I0216 00:11:58.489768 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=8.489745467 podStartE2EDuration="8.489745467s" podCreationTimestamp="2026-02-16 00:11:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:11:52.020913999 +0000 UTC m=+188.402190817" watchObservedRunningTime="2026-02-16 00:11:58.489745467 +0000 UTC m=+194.871022285" Feb 16 00:11:58 crc kubenswrapper[5114]: I0216 00:11:58.492716 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2jwtw"] Feb 16 00:11:58 crc kubenswrapper[5114]: I0216 00:11:58.780015 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Feb 16 
00:12:23 crc kubenswrapper[5114]: I0216 00:12:23.545814 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" podUID="24991a86-e06b-4e9e-8992-50fbe36dfe01" containerName="oauth-openshift" containerID="cri-o://a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512" gracePeriod=15 Feb 16 00:12:23 crc kubenswrapper[5114]: I0216 00:12:23.971672 5114 ???:1] "http: TLS handshake error from 192.168.126.11:45554: no serving certificate available for the kubelet" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.148812 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.197873 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7"] Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.198588 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="24991a86-e06b-4e9e-8992-50fbe36dfe01" containerName="oauth-openshift" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.198613 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="24991a86-e06b-4e9e-8992-50fbe36dfe01" containerName="oauth-openshift" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.198752 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="24991a86-e06b-4e9e-8992-50fbe36dfe01" containerName="oauth-openshift" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.225449 5114 generic.go:358] "Generic (PLEG): container finished" podID="24991a86-e06b-4e9e-8992-50fbe36dfe01" containerID="a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512" exitCode=0 Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.260651 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bf7f\" (UniqueName: 
\"kubernetes.io/projected/24991a86-e06b-4e9e-8992-50fbe36dfe01-kube-api-access-9bf7f\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261185 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-policies\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261282 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-session\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261329 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-provider-selection\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261388 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-login\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261441 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-service-ca\") 
pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261480 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-serving-cert\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261562 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-error\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261684 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-cliconfig\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261761 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-router-certs\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261815 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-trusted-ca-bundle\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: 
\"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261860 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-dir\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261892 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-idp-0-file-data\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.261934 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-ocp-branding-template\") pod \"24991a86-e06b-4e9e-8992-50fbe36dfe01\" (UID: \"24991a86-e06b-4e9e-8992-50fbe36dfe01\") " Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.263166 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.264592 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.264805 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.265102 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.266482 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.271476 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.273262 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.273764 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24991a86-e06b-4e9e-8992-50fbe36dfe01-kube-api-access-9bf7f" (OuterVolumeSpecName: "kube-api-access-9bf7f") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "kube-api-access-9bf7f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.274012 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.274233 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.274496 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.274832 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.275470 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.278263 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "24991a86-e06b-4e9e-8992-50fbe36dfe01" (UID: "24991a86-e06b-4e9e-8992-50fbe36dfe01"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.280526 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" event={"ID":"24991a86-e06b-4e9e-8992-50fbe36dfe01","Type":"ContainerDied","Data":"a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512"} Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.280584 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" event={"ID":"24991a86-e06b-4e9e-8992-50fbe36dfe01","Type":"ContainerDied","Data":"82e19595520fa41374e48f699fce65dd9abf976787a9f4a89a0be6f5a8e74c19"} Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.280598 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7"] Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.280626 5114 scope.go:117] "RemoveContainer" containerID="a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.280626 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-2jwtw" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.281141 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.354700 5114 scope.go:117] "RemoveContainer" containerID="a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512" Feb 16 00:12:24 crc kubenswrapper[5114]: E0216 00:12:24.357349 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512\": container with ID starting with a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512 not found: ID does not exist" containerID="a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.357414 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512"} err="failed to get container status \"a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512\": rpc error: code = NotFound desc = could not find container \"a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512\": container with ID starting with a909e849998251b7d7c438bcd601ab4554f9a4e42b3c4b77518ae71b2671c512 not found: ID does not exist" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.359893 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2jwtw"] Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363179 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363208 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363213 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-2jwtw"] Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363226 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363336 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363418 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363648 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363781 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363865 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363910 5114 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363936 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.363969 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.364012 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9bf7f\" (UniqueName: \"kubernetes.io/projected/24991a86-e06b-4e9e-8992-50fbe36dfe01-kube-api-access-9bf7f\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.364038 5114 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/24991a86-e06b-4e9e-8992-50fbe36dfe01-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.364088 5114 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/24991a86-e06b-4e9e-8992-50fbe36dfe01-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.466512 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.466727 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.466763 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-error\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.466826 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.466870 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.466891 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-session\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.466946 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6pb\" (UniqueName: \"kubernetes.io/projected/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-kube-api-access-ct6pb\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.467011 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.467047 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: 
\"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.467096 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-audit-dir\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.467149 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-login\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.467174 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-audit-policies\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.467220 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.467302 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569430 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569538 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-audit-dir\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569600 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-login\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569646 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-audit-policies\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: 
\"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569716 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569767 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569809 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569860 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569911 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-error\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.569969 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.570023 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.570058 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-session\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.570096 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ct6pb\" (UniqueName: \"kubernetes.io/projected/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-kube-api-access-ct6pb\") pod 
\"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.570172 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.571518 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-service-ca\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.572552 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-audit-policies\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.572737 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-audit-dir\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.572971 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.575223 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.575789 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.577151 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.577877 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-login\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: 
\"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.579008 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.580053 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.580802 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-session\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.581035 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-system-router-certs\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.581065 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-v4-0-config-user-template-error\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.601039 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct6pb\" (UniqueName: \"kubernetes.io/projected/13f09bad-d7b1-47fe-8642-ac19b8b89e0c-kube-api-access-ct6pb\") pod \"oauth-openshift-7f698bf5d7-7vkm7\" (UID: \"13f09bad-d7b1-47fe-8642-ac19b8b89e0c\") " pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:24 crc kubenswrapper[5114]: I0216 00:12:24.633782 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:25 crc kubenswrapper[5114]: I0216 00:12:25.130017 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7"] Feb 16 00:12:25 crc kubenswrapper[5114]: I0216 00:12:25.233956 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" event={"ID":"13f09bad-d7b1-47fe-8642-ac19b8b89e0c","Type":"ContainerStarted","Data":"b7028f9ab583ece5804cc119bcdc906306362c0197703fbca2146143c273060d"} Feb 16 00:12:25 crc kubenswrapper[5114]: I0216 00:12:25.852612 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24991a86-e06b-4e9e-8992-50fbe36dfe01" path="/var/lib/kubelet/pods/24991a86-e06b-4e9e-8992-50fbe36dfe01/volumes" Feb 16 00:12:26 crc kubenswrapper[5114]: I0216 00:12:26.249594 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" 
event={"ID":"13f09bad-d7b1-47fe-8642-ac19b8b89e0c","Type":"ContainerStarted","Data":"bf885584b333c39c125f2e93c9384f8e6c29f993dc901ff3c6f44c4aaedce018"} Feb 16 00:12:26 crc kubenswrapper[5114]: I0216 00:12:26.251469 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:26 crc kubenswrapper[5114]: I0216 00:12:26.261954 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" Feb 16 00:12:26 crc kubenswrapper[5114]: I0216 00:12:26.285350 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7f698bf5d7-7vkm7" podStartSLOduration=28.285310755 podStartE2EDuration="28.285310755s" podCreationTimestamp="2026-02-16 00:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:12:26.283178546 +0000 UTC m=+222.664455394" watchObservedRunningTime="2026-02-16 00:12:26.285310755 +0000 UTC m=+222.666587613" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.674290 5114 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.691450 5114 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.692324 5114 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.692986 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" 
containerID="cri-o://c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a" gracePeriod=15 Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693228 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788" gracePeriod=15 Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693441 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556" gracePeriod=15 Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693585 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693621 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693508 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673" gracePeriod=15 Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693661 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693825 5114 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693589 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e" gracePeriod=15 Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693848 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693915 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.693953 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694037 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694053 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694078 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694091 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694116 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694130 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694145 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694157 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694176 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694190 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694205 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694218 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694662 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694692 5114 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694716 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694776 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694800 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694817 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694835 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.694858 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.695081 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.695103 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.695394 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.700376 5114 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.756817 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.756936 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.756975 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.757090 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.757223 
5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.757301 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.757393 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.757516 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.757592 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: 
I0216 00:12:29.757614 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.774029 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: E0216 00:12:29.775305 5114 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.859557 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.859805 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.859923 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860076 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860196 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860201 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860316 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860479 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc 
kubenswrapper[5114]: I0216 00:12:29.860495 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860715 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860834 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860960 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860526 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.860558 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.861481 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.861614 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.861661 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.861726 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.861757 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" 
(UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:29 crc kubenswrapper[5114]: I0216 00:12:29.861865 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.076969 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:30 crc kubenswrapper[5114]: W0216 00:12:30.103077 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-cdf3c1f047b613c6dfd77e5b2b3dc83536ffed7011c7c3b2fb32deb0f1af693d WatchSource:0}: Error finding container cdf3c1f047b613c6dfd77e5b2b3dc83536ffed7011c7c3b2fb32deb0f1af693d: Status 404 returned error can't find the container with id cdf3c1f047b613c6dfd77e5b2b3dc83536ffed7011c7c3b2fb32deb0f1af693d Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.107810 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189491b2b009c935 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:12:30.107085109 +0000 UTC m=+226.488361957,LastTimestamp:2026-02-16 00:12:30.107085109 +0000 UTC m=+226.488361957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.147619 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.148500 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.149124 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.149721 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 
00:12:30.150318 5114 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.150425 5114 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.150972 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="200ms" Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.284903 5114 generic.go:358] "Generic (PLEG): container finished" podID="43f2d947-e8e7-4739-ade2-215a72259fd3" containerID="15b84f1d59ac59688c1635aa9b10c85dd778c6aace1aaa2f0fa018d2ead42462" exitCode=0 Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.285030 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"43f2d947-e8e7-4739-ade2-215a72259fd3","Type":"ContainerDied","Data":"15b84f1d59ac59688c1635aa9b10c85dd778c6aace1aaa2f0fa018d2ead42462"} Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.286448 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.288895 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Feb 16 00:12:30 crc 
kubenswrapper[5114]: I0216 00:12:30.292437 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.293171 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788" exitCode=0 Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.293198 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e" exitCode=0 Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.293210 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556" exitCode=0 Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.293219 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673" exitCode=2 Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.293299 5114 scope.go:117] "RemoveContainer" containerID="52f25b1258c4149dbea0aaf2c4ecf257d3b0389d8bbbcb7599c59c51cb7d97a6" Feb 16 00:12:30 crc kubenswrapper[5114]: I0216 00:12:30.297269 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"cdf3c1f047b613c6dfd77e5b2b3dc83536ffed7011c7c3b2fb32deb0f1af693d"} Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.352045 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial 
tcp 38.102.83.233:6443: connect: connection refused" interval="400ms" Feb 16 00:12:30 crc kubenswrapper[5114]: E0216 00:12:30.753340 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="800ms" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.309545 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.313662 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.313670 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"65cd0b319bb9215a66b777c6d9d793cb20755e692d89dfcc394d21349107bf4a"} Feb 16 00:12:31 crc kubenswrapper[5114]: E0216 00:12:31.314508 5114 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.314903 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:31 crc kubenswrapper[5114]: E0216 00:12:31.554475 5114 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="1.6s" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.704066 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.705168 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.792607 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43f2d947-e8e7-4739-ade2-215a72259fd3-kube-api-access\") pod \"43f2d947-e8e7-4739-ade2-215a72259fd3\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.792814 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-var-lock\") pod \"43f2d947-e8e7-4739-ade2-215a72259fd3\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.793005 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-kubelet-dir\") pod \"43f2d947-e8e7-4739-ade2-215a72259fd3\" (UID: \"43f2d947-e8e7-4739-ade2-215a72259fd3\") " Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.793721 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-var-lock" (OuterVolumeSpecName: "var-lock") pod "43f2d947-e8e7-4739-ade2-215a72259fd3" (UID: "43f2d947-e8e7-4739-ade2-215a72259fd3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.793883 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "43f2d947-e8e7-4739-ade2-215a72259fd3" (UID: "43f2d947-e8e7-4739-ade2-215a72259fd3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.794156 5114 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.794184 5114 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/43f2d947-e8e7-4739-ade2-215a72259fd3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.824740 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43f2d947-e8e7-4739-ade2-215a72259fd3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "43f2d947-e8e7-4739-ade2-215a72259fd3" (UID: "43f2d947-e8e7-4739-ade2-215a72259fd3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:12:31 crc kubenswrapper[5114]: I0216 00:12:31.894988 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/43f2d947-e8e7-4739-ade2-215a72259fd3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.104472 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.105311 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.106466 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.106822 5114 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197100 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197172 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197222 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197230 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197284 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197351 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197379 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197388 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197653 5114 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197671 5114 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.197682 5114 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.198028 5114 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.200498 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.299104 5114 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.299155 5114 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.323241 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.323230 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"43f2d947-e8e7-4739-ade2-215a72259fd3","Type":"ContainerDied","Data":"ddab52b1403b7244e73ff8084bdf3806c7d4e0bb05dd8d0193d7dd6ebce9c418"} Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.323386 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddab52b1403b7244e73ff8084bdf3806c7d4e0bb05dd8d0193d7dd6ebce9c418" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.329534 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.329734 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.330065 5114 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.331308 5114 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a" exitCode=0 Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.331392 5114 scope.go:117] "RemoveContainer" 
containerID="da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.331472 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.332787 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:32 crc kubenswrapper[5114]: E0216 00:12:32.334718 5114 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.356187 5114 scope.go:117] "RemoveContainer" containerID="777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.360358 5114 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.361091 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.383672 5114 scope.go:117] "RemoveContainer" containerID="6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.406813 5114 
scope.go:117] "RemoveContainer" containerID="f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.438154 5114 scope.go:117] "RemoveContainer" containerID="c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.461068 5114 scope.go:117] "RemoveContainer" containerID="8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.545154 5114 scope.go:117] "RemoveContainer" containerID="da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788" Feb 16 00:12:32 crc kubenswrapper[5114]: E0216 00:12:32.546033 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788\": container with ID starting with da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788 not found: ID does not exist" containerID="da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.546918 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788"} err="failed to get container status \"da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788\": rpc error: code = NotFound desc = could not find container \"da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788\": container with ID starting with da4f7d2c40564806bdf7983d19efbc4c1c876759d4b909fbdbaca127f6609788 not found: ID does not exist" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.547078 5114 scope.go:117] "RemoveContainer" containerID="777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e" Feb 16 00:12:32 crc kubenswrapper[5114]: E0216 00:12:32.548043 5114 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\": container with ID starting with 777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e not found: ID does not exist" containerID="777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.548107 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e"} err="failed to get container status \"777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\": rpc error: code = NotFound desc = could not find container \"777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e\": container with ID starting with 777d362c8b4b0a98cdb3b15892386839d71bc084a8d634594b3944d5898e086e not found: ID does not exist" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.548145 5114 scope.go:117] "RemoveContainer" containerID="6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556" Feb 16 00:12:32 crc kubenswrapper[5114]: E0216 00:12:32.548598 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\": container with ID starting with 6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556 not found: ID does not exist" containerID="6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.548645 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556"} err="failed to get container status \"6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\": rpc error: code = NotFound desc = could not find container 
\"6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556\": container with ID starting with 6e4088821a8f40c320afd59e6304dcb80368d03841eaf6b6cea1d7ba7ca0e556 not found: ID does not exist" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.548673 5114 scope.go:117] "RemoveContainer" containerID="f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673" Feb 16 00:12:32 crc kubenswrapper[5114]: E0216 00:12:32.549150 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\": container with ID starting with f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673 not found: ID does not exist" containerID="f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.549209 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673"} err="failed to get container status \"f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\": rpc error: code = NotFound desc = could not find container \"f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673\": container with ID starting with f8702849aec6686d6ebaed6fb9db7c023e25a8c6cb88be8eec7cfcccf2a1a673 not found: ID does not exist" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.549240 5114 scope.go:117] "RemoveContainer" containerID="c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a" Feb 16 00:12:32 crc kubenswrapper[5114]: E0216 00:12:32.549722 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\": container with ID starting with c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a not found: ID does not exist" 
containerID="c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.549776 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a"} err="failed to get container status \"c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\": rpc error: code = NotFound desc = could not find container \"c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a\": container with ID starting with c69bc73e8f6cb165fecd545e4585f0c16d2e1c50fed3b28b5f32254663031c3a not found: ID does not exist" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.549831 5114 scope.go:117] "RemoveContainer" containerID="8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889" Feb 16 00:12:32 crc kubenswrapper[5114]: E0216 00:12:32.550353 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\": container with ID starting with 8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889 not found: ID does not exist" containerID="8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889" Feb 16 00:12:32 crc kubenswrapper[5114]: I0216 00:12:32.550409 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889"} err="failed to get container status \"8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\": rpc error: code = NotFound desc = could not find container \"8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889\": container with ID starting with 8217fbf2a4b5be42ea737137f404c7d81bc0443ee963b1813d6691c210d85889 not found: ID does not exist" Feb 16 00:12:33 crc kubenswrapper[5114]: E0216 00:12:33.156118 5114 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="3.2s" Feb 16 00:12:33 crc kubenswrapper[5114]: E0216 00:12:33.522233 5114 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.233:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189491b2b009c935 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-16 00:12:30.107085109 +0000 UTC m=+226.488361957,LastTimestamp:2026-02-16 00:12:30.107085109 +0000 UTC m=+226.488361957,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 16 00:12:33 crc kubenswrapper[5114]: I0216 00:12:33.826986 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Feb 16 00:12:35 crc kubenswrapper[5114]: I0216 00:12:35.824941 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:36 crc kubenswrapper[5114]: E0216 00:12:36.357609 5114 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.233:6443: connect: connection refused" interval="6.4s" Feb 16 00:12:36 crc kubenswrapper[5114]: E0216 00:12:36.909989 5114 desired_state_of_world_populator.go:305] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" volumeName="registry-storage" Feb 16 00:12:40 crc kubenswrapper[5114]: I0216 00:12:40.816219 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:40 crc kubenswrapper[5114]: I0216 00:12:40.817932 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:40 crc kubenswrapper[5114]: I0216 00:12:40.843291 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:40 crc kubenswrapper[5114]: I0216 00:12:40.843340 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:40 crc kubenswrapper[5114]: E0216 00:12:40.844080 5114 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:40 crc kubenswrapper[5114]: I0216 00:12:40.844444 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:40 crc kubenswrapper[5114]: W0216 00:12:40.876660 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-70e2dca55e618e18683a64fe2b9e25ad7c9663a54591c36368544e63cca94e4b WatchSource:0}: Error finding container 70e2dca55e618e18683a64fe2b9e25ad7c9663a54591c36368544e63cca94e4b: Status 404 returned error can't find the container with id 70e2dca55e618e18683a64fe2b9e25ad7c9663a54591c36368544e63cca94e4b Feb 16 00:12:41 crc kubenswrapper[5114]: I0216 00:12:41.405545 5114 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="c58f2a0696eef19df17c174183b5dd04ce8fc9d358537e4c23022e2f838e9179" exitCode=0 Feb 16 00:12:41 crc kubenswrapper[5114]: I0216 00:12:41.405670 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"c58f2a0696eef19df17c174183b5dd04ce8fc9d358537e4c23022e2f838e9179"} Feb 16 00:12:41 crc kubenswrapper[5114]: I0216 00:12:41.406178 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"70e2dca55e618e18683a64fe2b9e25ad7c9663a54591c36368544e63cca94e4b"} Feb 16 00:12:41 crc kubenswrapper[5114]: I0216 00:12:41.406695 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:41 crc kubenswrapper[5114]: I0216 00:12:41.406727 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:41 crc kubenswrapper[5114]: E0216 00:12:41.407229 5114 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:41 crc kubenswrapper[5114]: I0216 00:12:41.407286 5114 status_manager.go:895] "Failed to get status for pod" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.233:6443: connect: connection refused" Feb 16 00:12:42 crc kubenswrapper[5114]: I0216 00:12:42.426202 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"33f4410cc9597ef340bee808e0d42bada1f17f2ece58525aab7a8ec00e68b034"} Feb 16 00:12:42 crc kubenswrapper[5114]: I0216 00:12:42.426804 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"18028ba3f6b4281db1d5b35399f6010ceaf0f56c026d7da0f5ad47d83be3f5c1"} Feb 16 00:12:42 crc kubenswrapper[5114]: I0216 00:12:42.426819 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"740ddfde06d0c567b55b584619caf63c761b15006b1ca441d343f6263d622bb4"} Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.446759 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.447067 5114 generic.go:358] "Generic (PLEG): container finished" 
podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda" exitCode=1 Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.447356 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda"} Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.448748 5114 scope.go:117] "RemoveContainer" containerID="af7e6b510463af6632201d7d15d32ad85785d27c4eb97b677fd12c7b8aa6ffda" Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.455724 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"25a107ba419fe91ea33041adb4724494d9336a671942a436aac39b7f2d822f40"} Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.455787 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"76c1c31e2732f09d826e6179d5911d97f6ff8e3ddd760c231de7ad9eddf713eb"} Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.455969 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.456048 5114 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.456066 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:43 crc kubenswrapper[5114]: I0216 00:12:43.908613 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" 
status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:12:44 crc kubenswrapper[5114]: I0216 00:12:44.467587 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 16 00:12:44 crc kubenswrapper[5114]: I0216 00:12:44.467708 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"bcdecaedbe74a68aa0b1095fb1c1c4e076dcf63fece0790a7d35e08d7240063c"} Feb 16 00:12:45 crc kubenswrapper[5114]: I0216 00:12:45.844745 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:45 crc kubenswrapper[5114]: I0216 00:12:45.845202 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:45 crc kubenswrapper[5114]: I0216 00:12:45.854559 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:49 crc kubenswrapper[5114]: I0216 00:12:49.024238 5114 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:49 crc kubenswrapper[5114]: I0216 00:12:49.025363 5114 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 16 00:12:49 crc kubenswrapper[5114]: I0216 00:12:49.180635 5114 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="2f872a5b-3b9d-4125-8301-652f2bf68596" Feb 16 00:12:49 crc kubenswrapper[5114]: I0216 00:12:49.499604 5114 kubelet.go:3323] "Trying to 
delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:49 crc kubenswrapper[5114]: I0216 00:12:49.499857 5114 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="36e77927-3498-4ebe-bcc5-62b9ddc1ae34" Feb 16 00:12:49 crc kubenswrapper[5114]: I0216 00:12:49.502954 5114 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="2f872a5b-3b9d-4125-8301-652f2bf68596" Feb 16 00:12:50 crc kubenswrapper[5114]: I0216 00:12:50.084927 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:12:50 crc kubenswrapper[5114]: I0216 00:12:50.092275 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:12:50 crc kubenswrapper[5114]: I0216 00:12:50.940831 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:12:50 crc kubenswrapper[5114]: I0216 00:12:50.951057 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:12:51 crc kubenswrapper[5114]: I0216 00:12:51.514201 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:12:55 crc 
kubenswrapper[5114]: I0216 00:12:55.937473 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Feb 16 00:12:56 crc kubenswrapper[5114]: I0216 00:12:56.180893 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Feb 16 00:12:56 crc kubenswrapper[5114]: I0216 00:12:56.751363 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Feb 16 00:12:56 crc kubenswrapper[5114]: I0216 00:12:56.986698 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Feb 16 00:12:58 crc kubenswrapper[5114]: I0216 00:12:58.157217 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Feb 16 00:12:58 crc kubenswrapper[5114]: I0216 00:12:58.412757 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Feb 16 00:12:58 crc kubenswrapper[5114]: I0216 00:12:58.531340 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Feb 16 00:12:58 crc kubenswrapper[5114]: I0216 00:12:58.901123 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Feb 16 00:12:59 crc kubenswrapper[5114]: I0216 00:12:59.315444 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.238997 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.398605 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.545761 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.623218 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.674024 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.753917 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.773088 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.827378 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.893298 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Feb 16 00:13:00 crc kubenswrapper[5114]: I0216 00:13:00.920782 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Feb 16 00:13:01 
crc kubenswrapper[5114]: I0216 00:13:01.563392 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Feb 16 00:13:01 crc kubenswrapper[5114]: I0216 00:13:01.595895 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Feb 16 00:13:01 crc kubenswrapper[5114]: I0216 00:13:01.609728 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Feb 16 00:13:01 crc kubenswrapper[5114]: I0216 00:13:01.747194 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Feb 16 00:13:01 crc kubenswrapper[5114]: I0216 00:13:01.796013 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Feb 16 00:13:01 crc kubenswrapper[5114]: I0216 00:13:01.845932 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Feb 16 00:13:01 crc kubenswrapper[5114]: I0216 00:13:01.957488 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.017210 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.040176 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.080665 5114 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.529152 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.592413 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.675776 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.697654 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.827743 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Feb 16 00:13:02 crc kubenswrapper[5114]: I0216 00:13:02.957876 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.286799 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.301733 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.391618 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.415644 5114 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.551799 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.573904 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.625601 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.651966 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.680985 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.688049 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Feb 16 00:13:03 crc kubenswrapper[5114]: I0216 00:13:03.776350 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.217837 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.238030 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.245969 
5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.251784 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.319807 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.326019 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.333205 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.380819 5114 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.381587 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.381777 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.395272 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.510833 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Feb 16 00:13:04 
crc kubenswrapper[5114]: I0216 00:13:04.556526 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.557137 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.561116 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.587305 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.708351 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.709744 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.764487 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.817333 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.819172 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.970463 5114 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Feb 16 00:13:04 crc kubenswrapper[5114]: I0216 00:13:04.983332 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.044902 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.045744 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.164669 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.210824 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.229388 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.338695 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.364096 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.379006 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 
00:13:05.528198 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.536691 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.542414 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.599050 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.642725 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.676576 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.706296 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.708290 5114 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.746485 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Feb 16 00:13:05 crc kubenswrapper[5114]: I0216 00:13:05.837297 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Feb 16 00:13:05 crc kubenswrapper[5114]: 
I0216 00:13:05.893909 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.051380 5114 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.068290 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.209402 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.223968 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.418507 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.426279 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.426550 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.469768 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.572134 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.650202 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.686506 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.722872 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.724346 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.769229 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.800574 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.828091 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.885728 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\""
Feb 16 00:13:06 crc kubenswrapper[5114]: I0216 00:13:06.944783 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.139092 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.201151 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.238907 5114 ???:1] "http: TLS handshake error from 192.168.126.11:40652: no serving certificate available for the kubelet"
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.348354 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.412131 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.543115 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.580773 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.590980 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.616646 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.624877 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.672188 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.694047 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.776131 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.776192 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.803044 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.879460 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.981205 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\""
Feb 16 00:13:07 crc kubenswrapper[5114]: I0216 00:13:07.992177 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.015619 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.040010 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.119149 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.280009 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.320667 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.348661 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.407159 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.417513 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.530714 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.615463 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.647858 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.838952 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.870321 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.911748 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.933432 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\""
Feb 16 00:13:08 crc kubenswrapper[5114]: I0216 00:13:08.975804 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.042288 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.043319 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.145955 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.215881 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.218571 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.265191 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.286941 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.314781 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.345904 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.454336 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.594740 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.653477 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.685542 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.693224 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.700501 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.813367 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.838551 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.872219 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\""
Feb 16 00:13:09 crc kubenswrapper[5114]: I0216 00:13:09.948549 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.035933 5114 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.043089 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.043154 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.051166 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.052501 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.067969 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=21.067948497 podStartE2EDuration="21.067948497s" podCreationTimestamp="2026-02-16 00:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:13:10.066021254 +0000 UTC m=+266.447298072" watchObservedRunningTime="2026-02-16 00:13:10.067948497 +0000 UTC m=+266.449225315"
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.121154 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.148427 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.199963 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.225082 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.225082 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.277813 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.330001 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.374584 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.424769 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.489571 5114 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.490140 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://65cd0b319bb9215a66b777c6d9d793cb20755e692d89dfcc394d21349107bf4a" gracePeriod=5
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.494309 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.591008 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.643042 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.684018 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.719153 5114 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.719584 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\""
Feb 16 00:13:10 crc kubenswrapper[5114]: I0216 00:13:10.790082 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.049488 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.147375 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.233028 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.245480 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.314726 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.353899 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.354720 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.507839 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.596781 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.682592 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.705892 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.789554 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.791944 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.863485 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.868238 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\""
Feb 16 00:13:11 crc kubenswrapper[5114]: I0216 00:13:11.928096 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.044848 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.067123 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.335096 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.393446 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.464715 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.599670 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.667732 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.715844 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.720980 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.763316 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.816131 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:12 crc kubenswrapper[5114]: I0216 00:13:12.917726 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.051190 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.066979 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.123777 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.522216 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.546466 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.565537 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.587076 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.594313 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.677505 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.777741 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.854219 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.858380 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.881161 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\""
Feb 16 00:13:13 crc kubenswrapper[5114]: I0216 00:13:13.895118 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.012928 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.042135 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.084882 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.156375 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.164796 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.294830 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.323700 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.579486 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.827217 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.848708 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.850140 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:14 crc kubenswrapper[5114]: I0216 00:13:14.888545 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.089539 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.158500 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.178746 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.270082 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.285224 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.294461 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.321120 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.429093 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.492649 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.667143 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.671957 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.672008 5114 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="65cd0b319bb9215a66b777c6d9d793cb20755e692d89dfcc394d21349107bf4a" exitCode=137
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.714610 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\""
Feb 16 00:13:15 crc kubenswrapper[5114]: I0216 00:13:15.903724 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\""
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.048703 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\""
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.089059 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log"
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.089163 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.091094 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object"
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158325 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158411 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158539 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158821 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158870 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158892 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158911 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.158952 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") "
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.159159 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.171865 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.214770 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.258418 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.260133 5114 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.260155 5114 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.260166 5114 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.260178 5114 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.260188 5114 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.367929 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Feb 16 00:13:16 crc kubenswrapper[5114]: 
I0216 00:13:16.414601 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.550996 5114 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.586786 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.644123 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.680826 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.681005 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.681061 5114 scope.go:117] "RemoveContainer" containerID="65cd0b319bb9215a66b777c6d9d793cb20755e692d89dfcc394d21349107bf4a" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.702004 5114 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Feb 16 00:13:16 crc kubenswrapper[5114]: I0216 00:13:16.777569 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Feb 16 00:13:17 crc kubenswrapper[5114]: I0216 00:13:17.008187 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Feb 16 00:13:17 crc kubenswrapper[5114]: I0216 00:13:17.137791 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Feb 16 00:13:17 crc kubenswrapper[5114]: I0216 00:13:17.450892 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Feb 16 00:13:17 crc kubenswrapper[5114]: I0216 00:13:17.825094 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Feb 16 00:13:17 crc kubenswrapper[5114]: I0216 00:13:17.859019 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Feb 16 00:13:18 crc kubenswrapper[5114]: I0216 00:13:18.368761 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 16 00:13:20 crc kubenswrapper[5114]: I0216 00:13:20.085279 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:13:20 crc kubenswrapper[5114]: I0216 00:13:20.085394 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.308342 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.310199 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" podUID="b25d038c-e025-44e6-8bf4-c0334cd5bab4" containerName="route-controller-manager" containerID="cri-o://5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680" gracePeriod=30 Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.313789 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-skdc2"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.314533 5114 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" podUID="85ed4f0e-0187-43d7-a456-eb14ee69d614" containerName="controller-manager" containerID="cri-o://6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792" gracePeriod=30 Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.694536 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.708701 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.742215 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.742932 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="85ed4f0e-0187-43d7-a456-eb14ee69d614" containerName="controller-manager" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.742953 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ed4f0e-0187-43d7-a456-eb14ee69d614" containerName="controller-manager" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.742966 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b25d038c-e025-44e6-8bf4-c0334cd5bab4" containerName="route-controller-manager" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.742974 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b25d038c-e025-44e6-8bf4-c0334cd5bab4" containerName="route-controller-manager" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.742983 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 16 00:13:36 crc kubenswrapper[5114]: 
I0216 00:13:36.742990 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.743004 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" containerName="installer" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.743010 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" containerName="installer" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.743123 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.743135 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="43f2d947-e8e7-4739-ade2-215a72259fd3" containerName="installer" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.743149 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="b25d038c-e025-44e6-8bf4-c0334cd5bab4" containerName="route-controller-manager" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.743161 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="85ed4f0e-0187-43d7-a456-eb14ee69d614" containerName="controller-manager" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.749935 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.762149 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.771060 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b25d038c-e025-44e6-8bf4-c0334cd5bab4-tmp\") pod \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.771460 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-proxy-ca-bundles\") pod \"85ed4f0e-0187-43d7-a456-eb14ee69d614\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.771563 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-client-ca\") pod \"85ed4f0e-0187-43d7-a456-eb14ee69d614\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.771809 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cpsq\" (UniqueName: \"kubernetes.io/projected/85ed4f0e-0187-43d7-a456-eb14ee69d614-kube-api-access-5cpsq\") pod \"85ed4f0e-0187-43d7-a456-eb14ee69d614\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.772002 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-config\") pod \"85ed4f0e-0187-43d7-a456-eb14ee69d614\" 
(UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.772112 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-config\") pod \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.772215 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-client-ca\") pod \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.772301 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d038c-e025-44e6-8bf4-c0334cd5bab4-serving-cert\") pod \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.772417 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcmqs\" (UniqueName: \"kubernetes.io/projected/b25d038c-e025-44e6-8bf4-c0334cd5bab4-kube-api-access-kcmqs\") pod \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\" (UID: \"b25d038c-e025-44e6-8bf4-c0334cd5bab4\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.772512 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ed4f0e-0187-43d7-a456-eb14ee69d614-serving-cert\") pod \"85ed4f0e-0187-43d7-a456-eb14ee69d614\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.772661 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/85ed4f0e-0187-43d7-a456-eb14ee69d614-tmp\") pod \"85ed4f0e-0187-43d7-a456-eb14ee69d614\" (UID: \"85ed4f0e-0187-43d7-a456-eb14ee69d614\") " Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.774388 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85ed4f0e-0187-43d7-a456-eb14ee69d614-tmp" (OuterVolumeSpecName: "tmp") pod "85ed4f0e-0187-43d7-a456-eb14ee69d614" (UID: "85ed4f0e-0187-43d7-a456-eb14ee69d614"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.774751 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b25d038c-e025-44e6-8bf4-c0334cd5bab4-tmp" (OuterVolumeSpecName: "tmp") pod "b25d038c-e025-44e6-8bf4-c0334cd5bab4" (UID: "b25d038c-e025-44e6-8bf4-c0334cd5bab4"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.775732 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "85ed4f0e-0187-43d7-a456-eb14ee69d614" (UID: "85ed4f0e-0187-43d7-a456-eb14ee69d614"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.776904 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-client-ca" (OuterVolumeSpecName: "client-ca") pod "b25d038c-e025-44e6-8bf4-c0334cd5bab4" (UID: "b25d038c-e025-44e6-8bf4-c0334cd5bab4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.777319 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-config" (OuterVolumeSpecName: "config") pod "85ed4f0e-0187-43d7-a456-eb14ee69d614" (UID: "85ed4f0e-0187-43d7-a456-eb14ee69d614"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.777803 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-client-ca" (OuterVolumeSpecName: "client-ca") pod "85ed4f0e-0187-43d7-a456-eb14ee69d614" (UID: "85ed4f0e-0187-43d7-a456-eb14ee69d614"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.778884 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-config" (OuterVolumeSpecName: "config") pod "b25d038c-e025-44e6-8bf4-c0334cd5bab4" (UID: "b25d038c-e025-44e6-8bf4-c0334cd5bab4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.793791 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-795f48dcf9-mxxrx"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.796120 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b25d038c-e025-44e6-8bf4-c0334cd5bab4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b25d038c-e025-44e6-8bf4-c0334cd5bab4" (UID: "b25d038c-e025-44e6-8bf4-c0334cd5bab4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.798717 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b25d038c-e025-44e6-8bf4-c0334cd5bab4-kube-api-access-kcmqs" (OuterVolumeSpecName: "kube-api-access-kcmqs") pod "b25d038c-e025-44e6-8bf4-c0334cd5bab4" (UID: "b25d038c-e025-44e6-8bf4-c0334cd5bab4"). InnerVolumeSpecName "kube-api-access-kcmqs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.799724 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ed4f0e-0187-43d7-a456-eb14ee69d614-kube-api-access-5cpsq" (OuterVolumeSpecName: "kube-api-access-5cpsq") pod "85ed4f0e-0187-43d7-a456-eb14ee69d614" (UID: "85ed4f0e-0187-43d7-a456-eb14ee69d614"). InnerVolumeSpecName "kube-api-access-5cpsq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.800769 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85ed4f0e-0187-43d7-a456-eb14ee69d614-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "85ed4f0e-0187-43d7-a456-eb14ee69d614" (UID: "85ed4f0e-0187-43d7-a456-eb14ee69d614"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.803694 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.810136 5114 generic.go:358] "Generic (PLEG): container finished" podID="b25d038c-e025-44e6-8bf4-c0334cd5bab4" containerID="5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680" exitCode=0 Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.810298 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" event={"ID":"b25d038c-e025-44e6-8bf4-c0334cd5bab4","Type":"ContainerDied","Data":"5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680"} Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.810336 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" event={"ID":"b25d038c-e025-44e6-8bf4-c0334cd5bab4","Type":"ContainerDied","Data":"4c2885367c4db550426da3390fce4f11f262f94af4365e1b367267916626aed3"} Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.810378 5114 scope.go:117] "RemoveContainer" containerID="5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.810547 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.812344 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-795f48dcf9-mxxrx"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.813637 5114 generic.go:358] "Generic (PLEG): container finished" podID="85ed4f0e-0187-43d7-a456-eb14ee69d614" containerID="6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792" exitCode=0 Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.813699 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" event={"ID":"85ed4f0e-0187-43d7-a456-eb14ee69d614","Type":"ContainerDied","Data":"6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792"} Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.813729 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" event={"ID":"85ed4f0e-0187-43d7-a456-eb14ee69d614","Type":"ContainerDied","Data":"9ef2d6e4750a4338431ce1c06e6d559a868ea5b6abf2614eb84c1f8c9db76ca4"} Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.813834 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-skdc2" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.836786 5114 scope.go:117] "RemoveContainer" containerID="5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680" Feb 16 00:13:36 crc kubenswrapper[5114]: E0216 00:13:36.842224 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680\": container with ID starting with 5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680 not found: ID does not exist" containerID="5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.842390 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680"} err="failed to get container status \"5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680\": rpc error: code = NotFound desc = could not find container \"5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680\": container with ID starting with 5e1e78c9d0d05fbc125a6106f4186cbe84d9081238f54c7fa41e3b63c7bc2680 not found: ID does not exist" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.842493 5114 scope.go:117] "RemoveContainer" containerID="6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.868550 5114 scope.go:117] "RemoveContainer" containerID="6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.872367 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"] Feb 16 00:13:36 crc kubenswrapper[5114]: E0216 00:13:36.872683 5114 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792\": container with ID starting with 6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792 not found: ID does not exist" containerID="6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.872716 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792"} err="failed to get container status \"6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792\": rpc error: code = NotFound desc = could not find container \"6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792\": container with ID starting with 6f9e9fbc222b9fb4adf99ba2aea751991693ac5c3212365186aa59098c6c0792 not found: ID does not exist" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.873897 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-gldqw"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874677 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-client-ca\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874724 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-proxy-ca-bundles\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " 
pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874789 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-tmp\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874858 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-serving-cert\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874884 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr4rt\" (UniqueName: \"kubernetes.io/projected/bdfe9f06-7468-49c8-b189-9130103092c5-kube-api-access-sr4rt\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874924 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-client-ca\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874956 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cxcd\" (UniqueName: \"kubernetes.io/projected/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-kube-api-access-9cxcd\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.874982 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdfe9f06-7468-49c8-b189-9130103092c5-tmp\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875013 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-config\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875060 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdfe9f06-7468-49c8-b189-9130103092c5-serving-cert\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875089 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-config\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: 
\"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875135 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875147 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875156 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b25d038c-e025-44e6-8bf4-c0334cd5bab4-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875166 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b25d038c-e025-44e6-8bf4-c0334cd5bab4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875176 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kcmqs\" (UniqueName: \"kubernetes.io/projected/b25d038c-e025-44e6-8bf4-c0334cd5bab4-kube-api-access-kcmqs\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875189 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85ed4f0e-0187-43d7-a456-eb14ee69d614-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875197 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/85ed4f0e-0187-43d7-a456-eb14ee69d614-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875207 5114 
reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b25d038c-e025-44e6-8bf4-c0334cd5bab4-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875216 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875226 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/85ed4f0e-0187-43d7-a456-eb14ee69d614-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.875235 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cpsq\" (UniqueName: \"kubernetes.io/projected/85ed4f0e-0187-43d7-a456-eb14ee69d614-kube-api-access-5cpsq\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.883668 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-skdc2"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.887700 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-skdc2"] Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.976116 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-proxy-ca-bundles\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.976195 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-tmp\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.976233 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-serving-cert\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.976362 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sr4rt\" (UniqueName: \"kubernetes.io/projected/bdfe9f06-7468-49c8-b189-9130103092c5-kube-api-access-sr4rt\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.977015 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-tmp\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.977469 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-client-ca\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc 
kubenswrapper[5114]: I0216 00:13:36.977651 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9cxcd\" (UniqueName: \"kubernetes.io/projected/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-kube-api-access-9cxcd\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.977777 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdfe9f06-7468-49c8-b189-9130103092c5-tmp\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.978688 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-config\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.979090 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-client-ca\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.978369 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-proxy-ca-bundles\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: 
\"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.978607 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdfe9f06-7468-49c8-b189-9130103092c5-tmp\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.979731 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdfe9f06-7468-49c8-b189-9130103092c5-serving-cert\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.979865 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-config\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.980091 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-config\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.980918 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-serving-cert\") pod 
\"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.981601 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-client-ca\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.983049 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-client-ca\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.983583 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-config\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.983722 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdfe9f06-7468-49c8-b189-9130103092c5-serving-cert\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:36 crc kubenswrapper[5114]: I0216 00:13:36.995606 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cxcd\" (UniqueName: 
\"kubernetes.io/projected/be9af5a2-e0e9-4721-90c5-cf06bcd7c31d-kube-api-access-9cxcd\") pod \"route-controller-manager-6c65cf7b85-5xng4\" (UID: \"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d\") " pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.004157 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr4rt\" (UniqueName: \"kubernetes.io/projected/bdfe9f06-7468-49c8-b189-9130103092c5-kube-api-access-sr4rt\") pod \"controller-manager-795f48dcf9-mxxrx\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.075644 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.147787 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.355970 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4"] Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.436615 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-795f48dcf9-mxxrx"] Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.834011 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ed4f0e-0187-43d7-a456-eb14ee69d614" path="/var/lib/kubelet/pods/85ed4f0e-0187-43d7-a456-eb14ee69d614/volumes" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.840804 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b25d038c-e025-44e6-8bf4-c0334cd5bab4" path="/var/lib/kubelet/pods/b25d038c-e025-44e6-8bf4-c0334cd5bab4/volumes" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.841966 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.842016 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.842038 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" event={"ID":"bdfe9f06-7468-49c8-b189-9130103092c5","Type":"ContainerStarted","Data":"c441dba2a86eac74dd38ecc255e9996bc5e2d46f37bcf8c5785b07c0e6e027e4"} Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.842063 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" 
event={"ID":"bdfe9f06-7468-49c8-b189-9130103092c5","Type":"ContainerStarted","Data":"b2018280547f64de37c763cd2dde75f314ed2a28c60c9b14937d1d541bb0001f"} Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.842090 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" event={"ID":"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d","Type":"ContainerStarted","Data":"898bfc9c6d820f24032a5f0f158c075bc0f2f4147da7dbfd576f43da6a067c1f"} Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.842109 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" event={"ID":"be9af5a2-e0e9-4721-90c5-cf06bcd7c31d","Type":"ContainerStarted","Data":"e5aa5ca1715f1dde55d8629b4767aead96d0533a2f3878039c44aed16fc03126"} Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.859199 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" podStartSLOduration=1.859165107 podStartE2EDuration="1.859165107s" podCreationTimestamp="2026-02-16 00:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:13:37.856158703 +0000 UTC m=+294.237435561" watchObservedRunningTime="2026-02-16 00:13:37.859165107 +0000 UTC m=+294.240441965" Feb 16 00:13:37 crc kubenswrapper[5114]: I0216 00:13:37.893125 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" podStartSLOduration=1.8931018499999999 podStartE2EDuration="1.89310185s" podCreationTimestamp="2026-02-16 00:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:13:37.890123088 +0000 UTC m=+294.271399936" 
watchObservedRunningTime="2026-02-16 00:13:37.89310185 +0000 UTC m=+294.274378698" Feb 16 00:13:38 crc kubenswrapper[5114]: I0216 00:13:38.164358 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:38 crc kubenswrapper[5114]: I0216 00:13:38.531219 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c65cf7b85-5xng4" Feb 16 00:13:45 crc kubenswrapper[5114]: I0216 00:13:45.924229 5114 ???:1] "http: TLS handshake error from 192.168.126.11:52956: no serving certificate available for the kubelet" Feb 16 00:13:46 crc kubenswrapper[5114]: I0216 00:13:46.050644 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 16 00:13:46 crc kubenswrapper[5114]: I0216 00:13:46.050848 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.084885 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.085922 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 
00:13:50.086019 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.087233 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e129ae4ee7d3742ba2d538ce3a74a1fc75d899264cde2462cc24760ecb7481d2"} pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.087403 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" containerID="cri-o://e129ae4ee7d3742ba2d538ce3a74a1fc75d899264cde2462cc24760ecb7481d2" gracePeriod=600 Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.232638 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.925852 5114 generic.go:358] "Generic (PLEG): container finished" podID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerID="e129ae4ee7d3742ba2d538ce3a74a1fc75d899264cde2462cc24760ecb7481d2" exitCode=0 Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.925933 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerDied","Data":"e129ae4ee7d3742ba2d538ce3a74a1fc75d899264cde2462cc24760ecb7481d2"} Feb 16 00:13:50 crc kubenswrapper[5114]: I0216 00:13:50.926656 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" 
event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"8a3ce095df471cd9bc6cb7b32e5ca37c749a18ef9c74e7e6da2f540e061ab35d"} Feb 16 00:13:56 crc kubenswrapper[5114]: I0216 00:13:56.322120 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-795f48dcf9-mxxrx"] Feb 16 00:13:56 crc kubenswrapper[5114]: I0216 00:13:56.323378 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" podUID="bdfe9f06-7468-49c8-b189-9130103092c5" containerName="controller-manager" containerID="cri-o://c441dba2a86eac74dd38ecc255e9996bc5e2d46f37bcf8c5785b07c0e6e027e4" gracePeriod=30 Feb 16 00:13:56 crc kubenswrapper[5114]: I0216 00:13:56.971480 5114 generic.go:358] "Generic (PLEG): container finished" podID="bdfe9f06-7468-49c8-b189-9130103092c5" containerID="c441dba2a86eac74dd38ecc255e9996bc5e2d46f37bcf8c5785b07c0e6e027e4" exitCode=0 Feb 16 00:13:56 crc kubenswrapper[5114]: I0216 00:13:56.971591 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" event={"ID":"bdfe9f06-7468-49c8-b189-9130103092c5","Type":"ContainerDied","Data":"c441dba2a86eac74dd38ecc255e9996bc5e2d46f37bcf8c5785b07c0e6e027e4"} Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.076396 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.163126 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5cfbf5d896-djgkx"] Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.163873 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bdfe9f06-7468-49c8-b189-9130103092c5" containerName="controller-manager" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.163893 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfe9f06-7468-49c8-b189-9130103092c5" containerName="controller-manager" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.164036 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="bdfe9f06-7468-49c8-b189-9130103092c5" containerName="controller-manager" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.171854 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.177956 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cfbf5d896-djgkx"] Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.190912 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdfe9f06-7468-49c8-b189-9130103092c5-serving-cert\") pod \"bdfe9f06-7468-49c8-b189-9130103092c5\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.191055 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdfe9f06-7468-49c8-b189-9130103092c5-tmp\") pod \"bdfe9f06-7468-49c8-b189-9130103092c5\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.191118 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-proxy-ca-bundles\") pod \"bdfe9f06-7468-49c8-b189-9130103092c5\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.191162 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr4rt\" (UniqueName: \"kubernetes.io/projected/bdfe9f06-7468-49c8-b189-9130103092c5-kube-api-access-sr4rt\") pod \"bdfe9f06-7468-49c8-b189-9130103092c5\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.191262 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-config\") pod \"bdfe9f06-7468-49c8-b189-9130103092c5\" (UID: 
\"bdfe9f06-7468-49c8-b189-9130103092c5\") " Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.191324 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-client-ca\") pod \"bdfe9f06-7468-49c8-b189-9130103092c5\" (UID: \"bdfe9f06-7468-49c8-b189-9130103092c5\") " Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.192650 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "bdfe9f06-7468-49c8-b189-9130103092c5" (UID: "bdfe9f06-7468-49c8-b189-9130103092c5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.192716 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bdfe9f06-7468-49c8-b189-9130103092c5" (UID: "bdfe9f06-7468-49c8-b189-9130103092c5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.193618 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdfe9f06-7468-49c8-b189-9130103092c5-tmp" (OuterVolumeSpecName: "tmp") pod "bdfe9f06-7468-49c8-b189-9130103092c5" (UID: "bdfe9f06-7468-49c8-b189-9130103092c5"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.193756 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-config" (OuterVolumeSpecName: "config") pod "bdfe9f06-7468-49c8-b189-9130103092c5" (UID: "bdfe9f06-7468-49c8-b189-9130103092c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.201216 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfe9f06-7468-49c8-b189-9130103092c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bdfe9f06-7468-49c8-b189-9130103092c5" (UID: "bdfe9f06-7468-49c8-b189-9130103092c5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.202174 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfe9f06-7468-49c8-b189-9130103092c5-kube-api-access-sr4rt" (OuterVolumeSpecName: "kube-api-access-sr4rt") pod "bdfe9f06-7468-49c8-b189-9130103092c5" (UID: "bdfe9f06-7468-49c8-b189-9130103092c5"). InnerVolumeSpecName "kube-api-access-sr4rt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293032 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbd13e0-e9ad-4e92-a8db-bcd632695463-serving-cert\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293076 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-client-ca\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293330 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-config\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293565 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcgrz\" (UniqueName: \"kubernetes.io/projected/dfbd13e0-e9ad-4e92-a8db-bcd632695463-kube-api-access-mcgrz\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293735 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-proxy-ca-bundles\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293767 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfbd13e0-e9ad-4e92-a8db-bcd632695463-tmp\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293890 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/bdfe9f06-7468-49c8-b189-9130103092c5-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293905 5114 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293917 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sr4rt\" (UniqueName: \"kubernetes.io/projected/bdfe9f06-7468-49c8-b189-9130103092c5-kube-api-access-sr4rt\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293928 5114 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.293942 5114 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bdfe9f06-7468-49c8-b189-9130103092c5-client-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:57 crc 
kubenswrapper[5114]: I0216 00:13:57.293952 5114 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bdfe9f06-7468-49c8-b189-9130103092c5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.395176 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-config\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.395382 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mcgrz\" (UniqueName: \"kubernetes.io/projected/dfbd13e0-e9ad-4e92-a8db-bcd632695463-kube-api-access-mcgrz\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.395457 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-proxy-ca-bundles\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.395978 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfbd13e0-e9ad-4e92-a8db-bcd632695463-tmp\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.396090 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbd13e0-e9ad-4e92-a8db-bcd632695463-serving-cert\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.396150 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-client-ca\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.396922 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-proxy-ca-bundles\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.397122 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-config\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.397367 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/dfbd13e0-e9ad-4e92-a8db-bcd632695463-tmp\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc 
kubenswrapper[5114]: I0216 00:13:57.397626 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dfbd13e0-e9ad-4e92-a8db-bcd632695463-client-ca\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.403604 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfbd13e0-e9ad-4e92-a8db-bcd632695463-serving-cert\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.419445 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcgrz\" (UniqueName: \"kubernetes.io/projected/dfbd13e0-e9ad-4e92-a8db-bcd632695463-kube-api-access-mcgrz\") pod \"controller-manager-5cfbf5d896-djgkx\" (UID: \"dfbd13e0-e9ad-4e92-a8db-bcd632695463\") " pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.489417 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.735043 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5cfbf5d896-djgkx"] Feb 16 00:13:57 crc kubenswrapper[5114]: W0216 00:13:57.746460 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbd13e0_e9ad_4e92_a8db_bcd632695463.slice/crio-d44aae2deb6541ea71983e626490e072846de2e3a685ebb662cd6bb0626cd45b WatchSource:0}: Error finding container d44aae2deb6541ea71983e626490e072846de2e3a685ebb662cd6bb0626cd45b: Status 404 returned error can't find the container with id d44aae2deb6541ea71983e626490e072846de2e3a685ebb662cd6bb0626cd45b Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.980295 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" event={"ID":"dfbd13e0-e9ad-4e92-a8db-bcd632695463","Type":"ContainerStarted","Data":"5bd80b7f0c69b8e97dee609715a5ec475c50839b71c530a4a843ed77621c45a1"} Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.980368 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" event={"ID":"dfbd13e0-e9ad-4e92-a8db-bcd632695463","Type":"ContainerStarted","Data":"d44aae2deb6541ea71983e626490e072846de2e3a685ebb662cd6bb0626cd45b"} Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.980825 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.982392 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" 
event={"ID":"bdfe9f06-7468-49c8-b189-9130103092c5","Type":"ContainerDied","Data":"b2018280547f64de37c763cd2dde75f314ed2a28c60c9b14937d1d541bb0001f"} Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.982439 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-795f48dcf9-mxxrx" Feb 16 00:13:57 crc kubenswrapper[5114]: I0216 00:13:57.982459 5114 scope.go:117] "RemoveContainer" containerID="c441dba2a86eac74dd38ecc255e9996bc5e2d46f37bcf8c5785b07c0e6e027e4" Feb 16 00:13:58 crc kubenswrapper[5114]: I0216 00:13:58.009909 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" podStartSLOduration=2.009878418 podStartE2EDuration="2.009878418s" podCreationTimestamp="2026-02-16 00:13:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:13:58.006320769 +0000 UTC m=+314.387597597" watchObservedRunningTime="2026-02-16 00:13:58.009878418 +0000 UTC m=+314.391155276" Feb 16 00:13:58 crc kubenswrapper[5114]: I0216 00:13:58.026618 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-795f48dcf9-mxxrx"] Feb 16 00:13:58 crc kubenswrapper[5114]: I0216 00:13:58.033133 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-795f48dcf9-mxxrx"] Feb 16 00:13:58 crc kubenswrapper[5114]: I0216 00:13:58.393544 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5cfbf5d896-djgkx" Feb 16 00:13:59 crc kubenswrapper[5114]: I0216 00:13:59.825912 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfe9f06-7468-49c8-b189-9130103092c5" path="/var/lib/kubelet/pods/bdfe9f06-7468-49c8-b189-9130103092c5/volumes" Feb 16 00:14:09 crc 
kubenswrapper[5114]: I0216 00:14:09.270324 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9w976"] Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.271418 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9w976" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="registry-server" containerID="cri-o://c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e" gracePeriod=30 Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.291815 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llmwl"] Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.292306 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-llmwl" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="registry-server" containerID="cri-o://ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a" gracePeriod=30 Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.298034 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-crpbt"] Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.298380 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" containerName="marketplace-operator" containerID="cri-o://58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc" gracePeriod=30 Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.306407 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsm82"] Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.306950 5114 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-fsm82" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="registry-server" containerID="cri-o://591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3" gracePeriod=30 Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.316972 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ld7d"] Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.317275 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8ld7d" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="registry-server" containerID="cri-o://015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018" gracePeriod=30 Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.325466 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qqbpj"] Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.330113 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qqbpj"] Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.330276 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.470177 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d23892e-7be3-463c-800d-7cb9ec870736-tmp\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.470283 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqrf\" (UniqueName: \"kubernetes.io/projected/1d23892e-7be3-463c-800d-7cb9ec870736-kube-api-access-5zqrf\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.470319 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d23892e-7be3-463c-800d-7cb9ec870736-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.470349 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d23892e-7be3-463c-800d-7cb9ec870736-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.572211 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d23892e-7be3-463c-800d-7cb9ec870736-tmp\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.572688 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5zqrf\" (UniqueName: \"kubernetes.io/projected/1d23892e-7be3-463c-800d-7cb9ec870736-kube-api-access-5zqrf\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.572718 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d23892e-7be3-463c-800d-7cb9ec870736-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.572743 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d23892e-7be3-463c-800d-7cb9ec870736-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.574129 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d23892e-7be3-463c-800d-7cb9ec870736-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.574441 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/1d23892e-7be3-463c-800d-7cb9ec870736-tmp\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.586414 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/1d23892e-7be3-463c-800d-7cb9ec870736-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.596125 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zqrf\" (UniqueName: \"kubernetes.io/projected/1d23892e-7be3-463c-800d-7cb9ec870736-kube-api-access-5zqrf\") pod \"marketplace-operator-547dbd544d-qqbpj\" (UID: \"1d23892e-7be3-463c-800d-7cb9ec870736\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.808047 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.815305 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.918026 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.972217 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.983120 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-utilities\") pod \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.983286 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8f2h\" (UniqueName: \"kubernetes.io/projected/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-kube-api-access-w8f2h\") pod \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.983412 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-catalog-content\") pod \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\" (UID: \"35d79a09-4a13-4f64-b2ef-f7061b82f1f9\") " Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.983240 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.985683 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-utilities" (OuterVolumeSpecName: "utilities") pod "35d79a09-4a13-4f64-b2ef-f7061b82f1f9" (UID: "35d79a09-4a13-4f64-b2ef-f7061b82f1f9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.990712 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:14:09 crc kubenswrapper[5114]: I0216 00:14:09.995474 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-kube-api-access-w8f2h" (OuterVolumeSpecName: "kube-api-access-w8f2h") pod "35d79a09-4a13-4f64-b2ef-f7061b82f1f9" (UID: "35d79a09-4a13-4f64-b2ef-f7061b82f1f9"). InnerVolumeSpecName "kube-api-access-w8f2h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.046085 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35d79a09-4a13-4f64-b2ef-f7061b82f1f9" (UID: "35d79a09-4a13-4f64-b2ef-f7061b82f1f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.073884 5114 generic.go:358] "Generic (PLEG): container finished" podID="144852dc-946d-4a33-8453-c3d5bb49127d" containerID="58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc" exitCode=0 Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.074063 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" event={"ID":"144852dc-946d-4a33-8453-c3d5bb49127d","Type":"ContainerDied","Data":"58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.074112 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" event={"ID":"144852dc-946d-4a33-8453-c3d5bb49127d","Type":"ContainerDied","Data":"b0944797a88d66ed8cc4e6135707089e458e18981ae551bcf583fb87e6aabb3c"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.074138 5114 scope.go:117] "RemoveContainer" containerID="58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.074384 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-crpbt" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.078794 5114 generic.go:358] "Generic (PLEG): container finished" podID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerID="c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e" exitCode=0 Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.078913 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w976" event={"ID":"35d79a09-4a13-4f64-b2ef-f7061b82f1f9","Type":"ContainerDied","Data":"c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.078955 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9w976" event={"ID":"35d79a09-4a13-4f64-b2ef-f7061b82f1f9","Type":"ContainerDied","Data":"7e93191f9d6f8833a097f9d20745fdda23848b0bef8896105ba0e82d9fa736d2"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.079109 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9w976" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.082259 5114 generic.go:358] "Generic (PLEG): container finished" podID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerID="591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3" exitCode=0 Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.082426 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsm82" event={"ID":"5ffe7c6f-6349-415c-9729-182b0cc43e93","Type":"ContainerDied","Data":"591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.082467 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fsm82" event={"ID":"5ffe7c6f-6349-415c-9729-182b0cc43e93","Type":"ContainerDied","Data":"f3d643ec5655fa16d4b95c51ad5e4e51cb9e3ba8a4b7dafe36685a3e0001c425"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.082572 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fsm82" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.086745 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-utilities\") pod \"5ffe7c6f-6349-415c-9729-182b0cc43e93\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.086874 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-catalog-content\") pod \"5ffe7c6f-6349-415c-9729-182b0cc43e93\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.086915 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-utilities\") pod \"a392cbd8-29d4-4a9f-a413-40249fe74474\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.087051 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwz5w\" (UniqueName: \"kubernetes.io/projected/a392cbd8-29d4-4a9f-a413-40249fe74474-kube-api-access-jwz5w\") pod \"a392cbd8-29d4-4a9f-a413-40249fe74474\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.087102 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-trusted-ca\") pod \"144852dc-946d-4a33-8453-c3d5bb49127d\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.088103 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "144852dc-946d-4a33-8453-c3d5bb49127d" (UID: "144852dc-946d-4a33-8453-c3d5bb49127d"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.088488 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-utilities" (OuterVolumeSpecName: "utilities") pod "5ffe7c6f-6349-415c-9729-182b0cc43e93" (UID: "5ffe7c6f-6349-415c-9729-182b0cc43e93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.087476 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jdc4\" (UniqueName: \"kubernetes.io/projected/d846f09e-4870-4305-857c-b47bbe247686-kube-api-access-4jdc4\") pod \"d846f09e-4870-4305-857c-b47bbe247686\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.088978 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zv5hn\" (UniqueName: \"kubernetes.io/projected/144852dc-946d-4a33-8453-c3d5bb49127d-kube-api-access-zv5hn\") pod \"144852dc-946d-4a33-8453-c3d5bb49127d\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.089050 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-catalog-content\") pod \"a392cbd8-29d4-4a9f-a413-40249fe74474\" (UID: \"a392cbd8-29d4-4a9f-a413-40249fe74474\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.089086 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/144852dc-946d-4a33-8453-c3d5bb49127d-tmp\") pod \"144852dc-946d-4a33-8453-c3d5bb49127d\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.089157 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-operator-metrics\") pod \"144852dc-946d-4a33-8453-c3d5bb49127d\" (UID: \"144852dc-946d-4a33-8453-c3d5bb49127d\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.089197 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxlls\" (UniqueName: \"kubernetes.io/projected/5ffe7c6f-6349-415c-9729-182b0cc43e93-kube-api-access-dxlls\") pod \"5ffe7c6f-6349-415c-9729-182b0cc43e93\" (UID: \"5ffe7c6f-6349-415c-9729-182b0cc43e93\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.089267 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-utilities\") pod \"d846f09e-4870-4305-857c-b47bbe247686\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.089317 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-catalog-content\") pod \"d846f09e-4870-4305-857c-b47bbe247686\" (UID: \"d846f09e-4870-4305-857c-b47bbe247686\") " Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.088929 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-utilities" (OuterVolumeSpecName: "utilities") pod "a392cbd8-29d4-4a9f-a413-40249fe74474" (UID: "a392cbd8-29d4-4a9f-a413-40249fe74474"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.089648 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144852dc-946d-4a33-8453-c3d5bb49127d-tmp" (OuterVolumeSpecName: "tmp") pod "144852dc-946d-4a33-8453-c3d5bb49127d" (UID: "144852dc-946d-4a33-8453-c3d5bb49127d"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.090940 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a392cbd8-29d4-4a9f-a413-40249fe74474-kube-api-access-jwz5w" (OuterVolumeSpecName: "kube-api-access-jwz5w") pod "a392cbd8-29d4-4a9f-a413-40249fe74474" (UID: "a392cbd8-29d4-4a9f-a413-40249fe74474"). InnerVolumeSpecName "kube-api-access-jwz5w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.092751 5114 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/144852dc-946d-4a33-8453-c3d5bb49127d-tmp\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.092783 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.092799 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.092876 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc 
kubenswrapper[5114]: I0216 00:14:10.092893 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.092912 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jwz5w\" (UniqueName: \"kubernetes.io/projected/a392cbd8-29d4-4a9f-a413-40249fe74474-kube-api-access-jwz5w\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.092925 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.092939 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w8f2h\" (UniqueName: \"kubernetes.io/projected/35d79a09-4a13-4f64-b2ef-f7061b82f1f9-kube-api-access-w8f2h\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.093334 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-llmwl" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.093147 5114 generic.go:358] "Generic (PLEG): container finished" podID="d846f09e-4870-4305-857c-b47bbe247686" containerID="ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a" exitCode=0 Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.093994 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmwl" event={"ID":"d846f09e-4870-4305-857c-b47bbe247686","Type":"ContainerDied","Data":"ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.094049 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-llmwl" event={"ID":"d846f09e-4870-4305-857c-b47bbe247686","Type":"ContainerDied","Data":"140f8ba1f8fbef4aaba6d1dbbcd0e746a4eeaa7fe7a598e72f5681fd1e263a1c"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.094695 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ffe7c6f-6349-415c-9729-182b0cc43e93-kube-api-access-dxlls" (OuterVolumeSpecName: "kube-api-access-dxlls") pod "5ffe7c6f-6349-415c-9729-182b0cc43e93" (UID: "5ffe7c6f-6349-415c-9729-182b0cc43e93"). InnerVolumeSpecName "kube-api-access-dxlls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.094569 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d846f09e-4870-4305-857c-b47bbe247686-kube-api-access-4jdc4" (OuterVolumeSpecName: "kube-api-access-4jdc4") pod "d846f09e-4870-4305-857c-b47bbe247686" (UID: "d846f09e-4870-4305-857c-b47bbe247686"). InnerVolumeSpecName "kube-api-access-4jdc4". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.099617 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-utilities" (OuterVolumeSpecName: "utilities") pod "d846f09e-4870-4305-857c-b47bbe247686" (UID: "d846f09e-4870-4305-857c-b47bbe247686"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.100546 5114 scope.go:117] "RemoveContainer" containerID="58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.100563 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "144852dc-946d-4a33-8453-c3d5bb49127d" (UID: "144852dc-946d-4a33-8453-c3d5bb49127d"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.102422 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc\": container with ID starting with 58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc not found: ID does not exist" containerID="58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.102462 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc"} err="failed to get container status \"58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc\": rpc error: code = NotFound desc = could not find container \"58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc\": container with ID starting with 58ae2680206f22cd4975eb77a16633c000543efcf3fb8b975256ee294fe622fc not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.102488 5114 scope.go:117] "RemoveContainer" containerID="c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.105095 5114 generic.go:358] "Generic (PLEG): container finished" podID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerID="015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018" exitCode=0 Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.105221 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8ld7d" event={"ID":"a392cbd8-29d4-4a9f-a413-40249fe74474","Type":"ContainerDied","Data":"015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.105277 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-8ld7d" event={"ID":"a392cbd8-29d4-4a9f-a413-40249fe74474","Type":"ContainerDied","Data":"2e12c279e46449e23b129f41f97eb3f6ce80c49eea6690d7f081c3be9c73e047"} Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.105402 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8ld7d" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.110271 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144852dc-946d-4a33-8453-c3d5bb49127d-kube-api-access-zv5hn" (OuterVolumeSpecName: "kube-api-access-zv5hn") pod "144852dc-946d-4a33-8453-c3d5bb49127d" (UID: "144852dc-946d-4a33-8453-c3d5bb49127d"). InnerVolumeSpecName "kube-api-access-zv5hn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.115021 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ffe7c6f-6349-415c-9729-182b0cc43e93" (UID: "5ffe7c6f-6349-415c-9729-182b0cc43e93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.128306 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9w976"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.131754 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9w976"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.135979 5114 scope.go:117] "RemoveContainer" containerID="2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.151080 5114 scope.go:117] "RemoveContainer" containerID="73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.166636 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d846f09e-4870-4305-857c-b47bbe247686" (UID: "d846f09e-4870-4305-857c-b47bbe247686"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.167231 5114 scope.go:117] "RemoveContainer" containerID="c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.167864 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e\": container with ID starting with c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e not found: ID does not exist" containerID="c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.167914 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e"} err="failed to get container status \"c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e\": rpc error: code = NotFound desc = could not find container \"c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e\": container with ID starting with c89fa9c15cd90df99d79cba9b4d23151c76163c605949f3e4fcea9c2e895fe0e not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.167951 5114 scope.go:117] "RemoveContainer" containerID="2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.168383 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c\": container with ID starting with 2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c not found: ID does not exist" containerID="2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.168419 
5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c"} err="failed to get container status \"2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c\": rpc error: code = NotFound desc = could not find container \"2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c\": container with ID starting with 2729aa206a695e722cf30ebd5481b911abad643735bb8ddc3b619051ccf9d62c not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.168435 5114 scope.go:117] "RemoveContainer" containerID="73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.168748 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589\": container with ID starting with 73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589 not found: ID does not exist" containerID="73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.168829 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589"} err="failed to get container status \"73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589\": rpc error: code = NotFound desc = could not find container \"73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589\": container with ID starting with 73d63e7d599a3ce7f9c1eb081fd6a14babb5594926fa356516e521598f474589 not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.168891 5114 scope.go:117] "RemoveContainer" containerID="591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 
00:14:10.182938 5114 scope.go:117] "RemoveContainer" containerID="2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.194205 5114 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/144852dc-946d-4a33-8453-c3d5bb49127d-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.194235 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dxlls\" (UniqueName: \"kubernetes.io/projected/5ffe7c6f-6349-415c-9729-182b0cc43e93-kube-api-access-dxlls\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.194266 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.194280 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d846f09e-4870-4305-857c-b47bbe247686-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.194290 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ffe7c6f-6349-415c-9729-182b0cc43e93-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.194300 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4jdc4\" (UniqueName: \"kubernetes.io/projected/d846f09e-4870-4305-857c-b47bbe247686-kube-api-access-4jdc4\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.194309 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zv5hn\" (UniqueName: 
\"kubernetes.io/projected/144852dc-946d-4a33-8453-c3d5bb49127d-kube-api-access-zv5hn\") on node \"crc\" DevicePath \"\"" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.197502 5114 scope.go:117] "RemoveContainer" containerID="c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.210360 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a392cbd8-29d4-4a9f-a413-40249fe74474" (UID: "a392cbd8-29d4-4a9f-a413-40249fe74474"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.213779 5114 scope.go:117] "RemoveContainer" containerID="591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.214444 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3\": container with ID starting with 591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3 not found: ID does not exist" containerID="591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.214493 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3"} err="failed to get container status \"591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3\": rpc error: code = NotFound desc = could not find container \"591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3\": container with ID starting with 591eb6462404cfc8d1e4f42d9096c77b3c193af8974f9a199721028f69b24af3 not found: ID does not exist" Feb 16 00:14:10 
crc kubenswrapper[5114]: I0216 00:14:10.214541 5114 scope.go:117] "RemoveContainer" containerID="2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.214987 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a\": container with ID starting with 2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a not found: ID does not exist" containerID="2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.215036 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a"} err="failed to get container status \"2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a\": rpc error: code = NotFound desc = could not find container \"2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a\": container with ID starting with 2536abdda072b245362ee3732d9c92520e4b3b490cbbbb3fde7bcb3e05f7007a not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.215075 5114 scope.go:117] "RemoveContainer" containerID="c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.215410 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d\": container with ID starting with c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d not found: ID does not exist" containerID="c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.215448 5114 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d"} err="failed to get container status \"c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d\": rpc error: code = NotFound desc = could not find container \"c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d\": container with ID starting with c00c82f64d579984744164821b2eb0e082a7a114c3298a5da5185fcc51a4e67d not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.215466 5114 scope.go:117] "RemoveContainer" containerID="ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.241565 5114 scope.go:117] "RemoveContainer" containerID="745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.267589 5114 scope.go:117] "RemoveContainer" containerID="81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f" Feb 16 00:14:10 crc kubenswrapper[5114]: W0216 00:14:10.283191 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d23892e_7be3_463c_800d_7cb9ec870736.slice/crio-0c1c465b71569c463b969b3a2fe76a9200714ddb0cc06d769c2a98b7b6ce073c WatchSource:0}: Error finding container 0c1c465b71569c463b969b3a2fe76a9200714ddb0cc06d769c2a98b7b6ce073c: Status 404 returned error can't find the container with id 0c1c465b71569c463b969b3a2fe76a9200714ddb0cc06d769c2a98b7b6ce073c Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.283449 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-qqbpj"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.295693 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a392cbd8-29d4-4a9f-a413-40249fe74474-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 
16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.299369 5114 scope.go:117] "RemoveContainer" containerID="ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.299952 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a\": container with ID starting with ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a not found: ID does not exist" containerID="ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.300005 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a"} err="failed to get container status \"ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a\": rpc error: code = NotFound desc = could not find container \"ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a\": container with ID starting with ff3c646b74b98a1a249bdb3f049164dee9be46d8a1c0802d9f9735201a79109a not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.300043 5114 scope.go:117] "RemoveContainer" containerID="745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.300728 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d\": container with ID starting with 745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d not found: ID does not exist" containerID="745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.300798 5114 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d"} err="failed to get container status \"745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d\": rpc error: code = NotFound desc = could not find container \"745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d\": container with ID starting with 745788a6dfdbe76df0e9762f536c999b3534ef556e11b574e3bff1dc8d93fb2d not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.300842 5114 scope.go:117] "RemoveContainer" containerID="81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.301523 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f\": container with ID starting with 81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f not found: ID does not exist" containerID="81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.301566 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f"} err="failed to get container status \"81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f\": rpc error: code = NotFound desc = could not find container \"81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f\": container with ID starting with 81c1c919f7e348e03943c4cde51c6a2cc8bb58e6c85bf96b4fac1c165e35c76f not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.301596 5114 scope.go:117] "RemoveContainer" containerID="015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.324929 5114 scope.go:117] "RemoveContainer" 
containerID="fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.348115 5114 scope.go:117] "RemoveContainer" containerID="0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.366652 5114 scope.go:117] "RemoveContainer" containerID="015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.368274 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018\": container with ID starting with 015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018 not found: ID does not exist" containerID="015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.368361 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018"} err="failed to get container status \"015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018\": rpc error: code = NotFound desc = could not find container \"015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018\": container with ID starting with 015f8500963a6453812789741bfe90e5bff722c917321e6b55df71c5dc405018 not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.368401 5114 scope.go:117] "RemoveContainer" containerID="fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.369176 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a\": container with ID starting with 
fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a not found: ID does not exist" containerID="fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.369207 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a"} err="failed to get container status \"fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a\": rpc error: code = NotFound desc = could not find container \"fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a\": container with ID starting with fbe546c19f719c1d5ddadc1006cae1bab4ae53971ba8fb1f2f0f1e6fe0db754a not found: ID does not exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.369225 5114 scope.go:117] "RemoveContainer" containerID="0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9" Feb 16 00:14:10 crc kubenswrapper[5114]: E0216 00:14:10.369609 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9\": container with ID starting with 0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9 not found: ID does not exist" containerID="0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.369660 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9"} err="failed to get container status \"0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9\": rpc error: code = NotFound desc = could not find container \"0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9\": container with ID starting with 0c80fd94df2334f2bf6ae958f46d72bdaa33a67aeb7f7879c72816e78611eee9 not found: ID does not 
exist" Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.482305 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8ld7d"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.487404 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8ld7d"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.494756 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsm82"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.526418 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fsm82"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.542774 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-llmwl"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.551522 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-llmwl"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.557508 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-crpbt"] Feb 16 00:14:10 crc kubenswrapper[5114]: I0216 00:14:10.561384 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-crpbt"] Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.114427 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" event={"ID":"1d23892e-7be3-463c-800d-7cb9ec870736","Type":"ContainerStarted","Data":"e1adfb9ed1f201c35de6df97d72f05295853cfd58cbeb747f2767d5ff57f7b22"} Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.114786 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" 
event={"ID":"1d23892e-7be3-463c-800d-7cb9ec870736","Type":"ContainerStarted","Data":"0c1c465b71569c463b969b3a2fe76a9200714ddb0cc06d769c2a98b7b6ce073c"} Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.114808 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.119814 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.142916 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-qqbpj" podStartSLOduration=2.142887865 podStartE2EDuration="2.142887865s" podCreationTimestamp="2026-02-16 00:14:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:14:11.142848863 +0000 UTC m=+327.524125701" watchObservedRunningTime="2026-02-16 00:14:11.142887865 +0000 UTC m=+327.524164693" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.471850 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vmc5k"] Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472391 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" containerName="marketplace-operator" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472405 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" containerName="marketplace-operator" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472416 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="registry-server" Feb 16 00:14:11 crc 
kubenswrapper[5114]: I0216 00:14:11.472422 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472430 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472436 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472445 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472449 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472460 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472466 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472478 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472483 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472489 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472494 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472500 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472504 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472512 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472517 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="extract-content" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472524 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472531 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472540 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472545 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472552 5114 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472557 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472567 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472573 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="extract-utilities" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472669 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472677 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" containerName="marketplace-operator" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472689 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472696 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="d846f09e-4870-4305-857c-b47bbe247686" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.472703 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" containerName="registry-server" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.480680 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.482859 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmc5k"] Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.484086 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.615075 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgtl\" (UniqueName: \"kubernetes.io/projected/b86a55a5-c20f-46a3-9dce-e756830b00dc-kube-api-access-kcgtl\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.615388 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-catalog-content\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.615472 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-utilities\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.675036 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-858qd"] Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.682455 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.684748 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.684826 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-858qd"] Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.717349 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-catalog-content\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.717509 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-utilities\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.717594 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgtl\" (UniqueName: \"kubernetes.io/projected/b86a55a5-c20f-46a3-9dce-e756830b00dc-kube-api-access-kcgtl\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.717845 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-catalog-content\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " 
pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.717980 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-utilities\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.745449 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgtl\" (UniqueName: \"kubernetes.io/projected/b86a55a5-c20f-46a3-9dce-e756830b00dc-kube-api-access-kcgtl\") pod \"redhat-marketplace-vmc5k\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.802017 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.818910 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f57fd8-c18c-4747-9e05-c9061a12908e-utilities\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.818995 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgwdn\" (UniqueName: \"kubernetes.io/projected/c5f57fd8-c18c-4747-9e05-c9061a12908e-kube-api-access-rgwdn\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.819106 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f57fd8-c18c-4747-9e05-c9061a12908e-catalog-content\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.822213 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144852dc-946d-4a33-8453-c3d5bb49127d" path="/var/lib/kubelet/pods/144852dc-946d-4a33-8453-c3d5bb49127d/volumes" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.822818 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35d79a09-4a13-4f64-b2ef-f7061b82f1f9" path="/var/lib/kubelet/pods/35d79a09-4a13-4f64-b2ef-f7061b82f1f9/volumes" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.823570 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ffe7c6f-6349-415c-9729-182b0cc43e93" path="/var/lib/kubelet/pods/5ffe7c6f-6349-415c-9729-182b0cc43e93/volumes" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.824898 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a392cbd8-29d4-4a9f-a413-40249fe74474" path="/var/lib/kubelet/pods/a392cbd8-29d4-4a9f-a413-40249fe74474/volumes" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.825658 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d846f09e-4870-4305-857c-b47bbe247686" path="/var/lib/kubelet/pods/d846f09e-4870-4305-857c-b47bbe247686/volumes" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.922897 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f57fd8-c18c-4747-9e05-c9061a12908e-catalog-content\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.923270 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f57fd8-c18c-4747-9e05-c9061a12908e-utilities\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.923320 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rgwdn\" (UniqueName: \"kubernetes.io/projected/c5f57fd8-c18c-4747-9e05-c9061a12908e-kube-api-access-rgwdn\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.923536 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c5f57fd8-c18c-4747-9e05-c9061a12908e-catalog-content\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.924394 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c5f57fd8-c18c-4747-9e05-c9061a12908e-utilities\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:11 crc kubenswrapper[5114]: I0216 00:14:11.947761 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgwdn\" (UniqueName: \"kubernetes.io/projected/c5f57fd8-c18c-4747-9e05-c9061a12908e-kube-api-access-rgwdn\") pod \"redhat-operators-858qd\" (UID: \"c5f57fd8-c18c-4747-9e05-c9061a12908e\") " pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:12 crc kubenswrapper[5114]: I0216 00:14:12.061525 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:12 crc kubenswrapper[5114]: I0216 00:14:12.267380 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmc5k"] Feb 16 00:14:12 crc kubenswrapper[5114]: I0216 00:14:12.458366 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-858qd"] Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.139379 5114 generic.go:358] "Generic (PLEG): container finished" podID="c5f57fd8-c18c-4747-9e05-c9061a12908e" containerID="dbb4e518b687d727825b7dd32b2cee0fd8e461ae29ae486ef4b7e4ad45625755" exitCode=0 Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.139464 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858qd" event={"ID":"c5f57fd8-c18c-4747-9e05-c9061a12908e","Type":"ContainerDied","Data":"dbb4e518b687d727825b7dd32b2cee0fd8e461ae29ae486ef4b7e4ad45625755"} Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.139549 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858qd" event={"ID":"c5f57fd8-c18c-4747-9e05-c9061a12908e","Type":"ContainerStarted","Data":"d451243fecf13d515cff85d2baceb4b270c24ea81fbbf823d3ab6d10a207b30c"} Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.144366 5114 generic.go:358] "Generic (PLEG): container finished" podID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerID="d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3" exitCode=0 Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.144499 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmc5k" event={"ID":"b86a55a5-c20f-46a3-9dce-e756830b00dc","Type":"ContainerDied","Data":"d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3"} Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.144588 5114 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-vmc5k" event={"ID":"b86a55a5-c20f-46a3-9dce-e756830b00dc","Type":"ContainerStarted","Data":"d09db1c78956ec1af7cab1b07d0f420046de4b42564ced76a4aae1e7b6488526"} Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.699694 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rld8n"] Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.703929 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.746649 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rld8n"] Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.847713 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e58833-417c-4f87-9ac4-3ac98036a310-trusted-ca\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.847885 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-registry-tls\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.847962 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kllfl\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-kube-api-access-kllfl\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: 
\"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.848049 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/53e58833-417c-4f87-9ac4-3ac98036a310-registry-certificates\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.848213 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.848238 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/53e58833-417c-4f87-9ac4-3ac98036a310-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.848359 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/53e58833-417c-4f87-9ac4-3ac98036a310-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.848587 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.883257 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x55hq"] Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.887771 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.898046 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.905098 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.905506 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x55hq"] Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.952893 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.952967 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/53e58833-417c-4f87-9ac4-3ac98036a310-trusted-ca\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.953036 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-registry-tls\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.953067 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kllfl\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-kube-api-access-kllfl\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: 
\"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.953110 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/53e58833-417c-4f87-9ac4-3ac98036a310-registry-certificates\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.953185 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/53e58833-417c-4f87-9ac4-3ac98036a310-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.953210 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/53e58833-417c-4f87-9ac4-3ac98036a310-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.953953 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/53e58833-417c-4f87-9ac4-3ac98036a310-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.954996 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/53e58833-417c-4f87-9ac4-3ac98036a310-trusted-ca\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.955161 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/53e58833-417c-4f87-9ac4-3ac98036a310-registry-certificates\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.977999 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/53e58833-417c-4f87-9ac4-3ac98036a310-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.978298 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kllfl\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-kube-api-access-kllfl\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.979298 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-bound-sa-token\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:13 crc kubenswrapper[5114]: I0216 00:14:13.979676 5114 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/53e58833-417c-4f87-9ac4-3ac98036a310-registry-tls\") pod \"image-registry-5d9d95bf5b-rld8n\" (UID: \"53e58833-417c-4f87-9ac4-3ac98036a310\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.040221 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.054539 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-catalog-content\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.054700 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srgh4\" (UniqueName: \"kubernetes.io/projected/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-kube-api-access-srgh4\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.054801 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-utilities\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.076019 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sm2s6"] Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.088158 5114 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.088304 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sm2s6"] Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.090434 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.155843 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-catalog-content\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.155909 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-srgh4\" (UniqueName: \"kubernetes.io/projected/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-kube-api-access-srgh4\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.155951 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-utilities\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.156693 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-utilities\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " 
pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.157277 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-catalog-content\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.157492 5114 generic.go:358] "Generic (PLEG): container finished" podID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerID="42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49" exitCode=0 Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.157617 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmc5k" event={"ID":"b86a55a5-c20f-46a3-9dce-e756830b00dc","Type":"ContainerDied","Data":"42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49"} Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.175107 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858qd" event={"ID":"c5f57fd8-c18c-4747-9e05-c9061a12908e","Type":"ContainerStarted","Data":"19ed6c68bde0e24b1549153eb2584af8f2a6105a3904e740a5b5977499ca8417"} Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.183554 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-srgh4\" (UniqueName: \"kubernetes.io/projected/e9a8a33b-86f6-46d3-9efb-f4395a0a9830-kube-api-access-srgh4\") pod \"certified-operators-x55hq\" (UID: \"e9a8a33b-86f6-46d3-9efb-f4395a0a9830\") " pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.226837 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.257142 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgx5p\" (UniqueName: \"kubernetes.io/projected/e23f1349-18bf-40ca-8419-c94cbe0665a3-kube-api-access-hgx5p\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.257219 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23f1349-18bf-40ca-8419-c94cbe0665a3-catalog-content\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.257338 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23f1349-18bf-40ca-8419-c94cbe0665a3-utilities\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.359337 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23f1349-18bf-40ca-8419-c94cbe0665a3-catalog-content\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.359487 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23f1349-18bf-40ca-8419-c94cbe0665a3-utilities\") pod 
\"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.359524 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hgx5p\" (UniqueName: \"kubernetes.io/projected/e23f1349-18bf-40ca-8419-c94cbe0665a3-kube-api-access-hgx5p\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.360127 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e23f1349-18bf-40ca-8419-c94cbe0665a3-catalog-content\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.360706 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e23f1349-18bf-40ca-8419-c94cbe0665a3-utilities\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.379544 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgx5p\" (UniqueName: \"kubernetes.io/projected/e23f1349-18bf-40ca-8419-c94cbe0665a3-kube-api-access-hgx5p\") pod \"community-operators-sm2s6\" (UID: \"e23f1349-18bf-40ca-8419-c94cbe0665a3\") " pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.415514 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.478673 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-rld8n"] Feb 16 00:14:14 crc kubenswrapper[5114]: W0216 00:14:14.495532 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53e58833_417c_4f87_9ac4_3ac98036a310.slice/crio-12068d314dfe8818f250f668fb83ac90134c3095fd3ec7a8f791a43e34856d94 WatchSource:0}: Error finding container 12068d314dfe8818f250f668fb83ac90134c3095fd3ec7a8f791a43e34856d94: Status 404 returned error can't find the container with id 12068d314dfe8818f250f668fb83ac90134c3095fd3ec7a8f791a43e34856d94 Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.672516 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x55hq"] Feb 16 00:14:14 crc kubenswrapper[5114]: W0216 00:14:14.683179 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9a8a33b_86f6_46d3_9efb_f4395a0a9830.slice/crio-d04a7004552c63894e00c73f713dd75786afaacb8a9691a256e6a6d342ac95cc WatchSource:0}: Error finding container d04a7004552c63894e00c73f713dd75786afaacb8a9691a256e6a6d342ac95cc: Status 404 returned error can't find the container with id d04a7004552c63894e00c73f713dd75786afaacb8a9691a256e6a6d342ac95cc Feb 16 00:14:14 crc kubenswrapper[5114]: I0216 00:14:14.848464 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sm2s6"] Feb 16 00:14:14 crc kubenswrapper[5114]: W0216 00:14:14.855521 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode23f1349_18bf_40ca_8419_c94cbe0665a3.slice/crio-c0ee887305cf48cac5dfd7b2cbc6c8cfedff5b45cfd1eb03fa68b3bb4117853f 
WatchSource:0}: Error finding container c0ee887305cf48cac5dfd7b2cbc6c8cfedff5b45cfd1eb03fa68b3bb4117853f: Status 404 returned error can't find the container with id c0ee887305cf48cac5dfd7b2cbc6c8cfedff5b45cfd1eb03fa68b3bb4117853f Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.185583 5114 generic.go:358] "Generic (PLEG): container finished" podID="e23f1349-18bf-40ca-8419-c94cbe0665a3" containerID="0733a114e824bd20d166c1c53878dab88d1bfd95b7171ca1bbaa2814d0c9981f" exitCode=0 Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.185650 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sm2s6" event={"ID":"e23f1349-18bf-40ca-8419-c94cbe0665a3","Type":"ContainerDied","Data":"0733a114e824bd20d166c1c53878dab88d1bfd95b7171ca1bbaa2814d0c9981f"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.185731 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sm2s6" event={"ID":"e23f1349-18bf-40ca-8419-c94cbe0665a3","Type":"ContainerStarted","Data":"c0ee887305cf48cac5dfd7b2cbc6c8cfedff5b45cfd1eb03fa68b3bb4117853f"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.192475 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" event={"ID":"53e58833-417c-4f87-9ac4-3ac98036a310","Type":"ContainerStarted","Data":"efe645200c56780b910372f2ab362c6bc27ad93d829ef84c44f77d23718c41ec"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.192557 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" event={"ID":"53e58833-417c-4f87-9ac4-3ac98036a310","Type":"ContainerStarted","Data":"12068d314dfe8818f250f668fb83ac90134c3095fd3ec7a8f791a43e34856d94"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.192813 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.196846 5114 generic.go:358] "Generic (PLEG): container finished" podID="e9a8a33b-86f6-46d3-9efb-f4395a0a9830" containerID="11126b9503f7b0089012648178df4c5bc4ef31c26be2ec2ef4674b6f58bfea73" exitCode=0 Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.197072 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x55hq" event={"ID":"e9a8a33b-86f6-46d3-9efb-f4395a0a9830","Type":"ContainerDied","Data":"11126b9503f7b0089012648178df4c5bc4ef31c26be2ec2ef4674b6f58bfea73"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.197156 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x55hq" event={"ID":"e9a8a33b-86f6-46d3-9efb-f4395a0a9830","Type":"ContainerStarted","Data":"d04a7004552c63894e00c73f713dd75786afaacb8a9691a256e6a6d342ac95cc"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.200277 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmc5k" event={"ID":"b86a55a5-c20f-46a3-9dce-e756830b00dc","Type":"ContainerStarted","Data":"05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.212732 5114 generic.go:358] "Generic (PLEG): container finished" podID="c5f57fd8-c18c-4747-9e05-c9061a12908e" containerID="19ed6c68bde0e24b1549153eb2584af8f2a6105a3904e740a5b5977499ca8417" exitCode=0 Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.212848 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858qd" event={"ID":"c5f57fd8-c18c-4747-9e05-c9061a12908e","Type":"ContainerDied","Data":"19ed6c68bde0e24b1549153eb2584af8f2a6105a3904e740a5b5977499ca8417"} Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.259108 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" podStartSLOduration=2.259092759 podStartE2EDuration="2.259092759s" podCreationTimestamp="2026-02-16 00:14:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:14:15.256733992 +0000 UTC m=+331.638010810" watchObservedRunningTime="2026-02-16 00:14:15.259092759 +0000 UTC m=+331.640369577" Feb 16 00:14:15 crc kubenswrapper[5114]: I0216 00:14:15.284789 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vmc5k" podStartSLOduration=3.6809256599999998 podStartE2EDuration="4.284765507s" podCreationTimestamp="2026-02-16 00:14:11 +0000 UTC" firstStartedPulling="2026-02-16 00:14:13.146566715 +0000 UTC m=+329.527843533" lastFinishedPulling="2026-02-16 00:14:13.750406572 +0000 UTC m=+330.131683380" observedRunningTime="2026-02-16 00:14:15.277916435 +0000 UTC m=+331.659193253" watchObservedRunningTime="2026-02-16 00:14:15.284765507 +0000 UTC m=+331.666042325" Feb 16 00:14:16 crc kubenswrapper[5114]: I0216 00:14:16.221786 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-858qd" event={"ID":"c5f57fd8-c18c-4747-9e05-c9061a12908e","Type":"ContainerStarted","Data":"2b7017bff454c4d80a3b9482ae0f8296906bf6084b62ebb2b1dc4dc86fbe4635"} Feb 16 00:14:16 crc kubenswrapper[5114]: I0216 00:14:16.225681 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sm2s6" event={"ID":"e23f1349-18bf-40ca-8419-c94cbe0665a3","Type":"ContainerStarted","Data":"2b56541d2e14c4c8b95ce2bb7dd118fd2cb92aadf10dff83f2f0daf440215553"} Feb 16 00:14:16 crc kubenswrapper[5114]: I0216 00:14:16.227557 5114 generic.go:358] "Generic (PLEG): container finished" podID="e9a8a33b-86f6-46d3-9efb-f4395a0a9830" containerID="981e5525aa8ace2afecbe53df1206cd297adfa1b23d3f82139120c8f2de5599f" exitCode=0 Feb 16 
00:14:16 crc kubenswrapper[5114]: I0216 00:14:16.227968 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x55hq" event={"ID":"e9a8a33b-86f6-46d3-9efb-f4395a0a9830","Type":"ContainerDied","Data":"981e5525aa8ace2afecbe53df1206cd297adfa1b23d3f82139120c8f2de5599f"} Feb 16 00:14:16 crc kubenswrapper[5114]: I0216 00:14:16.244143 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-858qd" podStartSLOduration=4.684968736 podStartE2EDuration="5.244117272s" podCreationTimestamp="2026-02-16 00:14:11 +0000 UTC" firstStartedPulling="2026-02-16 00:14:13.140927197 +0000 UTC m=+329.522204015" lastFinishedPulling="2026-02-16 00:14:13.700075733 +0000 UTC m=+330.081352551" observedRunningTime="2026-02-16 00:14:16.239118622 +0000 UTC m=+332.620395440" watchObservedRunningTime="2026-02-16 00:14:16.244117272 +0000 UTC m=+332.625394090" Feb 16 00:14:17 crc kubenswrapper[5114]: I0216 00:14:17.257923 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x55hq" event={"ID":"e9a8a33b-86f6-46d3-9efb-f4395a0a9830","Type":"ContainerStarted","Data":"de4e5acadfa541df6312274fb436e236c9ab5a41bf712b4017e29dfd2ea1a261"} Feb 16 00:14:17 crc kubenswrapper[5114]: I0216 00:14:17.260702 5114 generic.go:358] "Generic (PLEG): container finished" podID="e23f1349-18bf-40ca-8419-c94cbe0665a3" containerID="2b56541d2e14c4c8b95ce2bb7dd118fd2cb92aadf10dff83f2f0daf440215553" exitCode=0 Feb 16 00:14:17 crc kubenswrapper[5114]: I0216 00:14:17.260867 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sm2s6" event={"ID":"e23f1349-18bf-40ca-8419-c94cbe0665a3","Type":"ContainerDied","Data":"2b56541d2e14c4c8b95ce2bb7dd118fd2cb92aadf10dff83f2f0daf440215553"} Feb 16 00:14:17 crc kubenswrapper[5114]: I0216 00:14:17.279146 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-x55hq" podStartSLOduration=3.638659142 podStartE2EDuration="4.279123773s" podCreationTimestamp="2026-02-16 00:14:13 +0000 UTC" firstStartedPulling="2026-02-16 00:14:15.197985819 +0000 UTC m=+331.579262647" lastFinishedPulling="2026-02-16 00:14:15.83845046 +0000 UTC m=+332.219727278" observedRunningTime="2026-02-16 00:14:17.276207671 +0000 UTC m=+333.657484519" watchObservedRunningTime="2026-02-16 00:14:17.279123773 +0000 UTC m=+333.660400601" Feb 16 00:14:18 crc kubenswrapper[5114]: I0216 00:14:18.281645 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sm2s6" event={"ID":"e23f1349-18bf-40ca-8419-c94cbe0665a3","Type":"ContainerStarted","Data":"69d3b23a0c1df520e44f480be1f18b9e298cf55025ce8133c671bfc752ff8b53"} Feb 16 00:14:18 crc kubenswrapper[5114]: I0216 00:14:18.309615 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sm2s6" podStartSLOduration=3.625845506 podStartE2EDuration="4.309584278s" podCreationTimestamp="2026-02-16 00:14:14 +0000 UTC" firstStartedPulling="2026-02-16 00:14:15.187160066 +0000 UTC m=+331.568436894" lastFinishedPulling="2026-02-16 00:14:15.870898848 +0000 UTC m=+332.252175666" observedRunningTime="2026-02-16 00:14:18.305539635 +0000 UTC m=+334.686816473" watchObservedRunningTime="2026-02-16 00:14:18.309584278 +0000 UTC m=+334.690861096" Feb 16 00:14:21 crc kubenswrapper[5114]: I0216 00:14:21.802717 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:21 crc kubenswrapper[5114]: I0216 00:14:21.803214 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:21 crc kubenswrapper[5114]: I0216 00:14:21.873183 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:22 crc kubenswrapper[5114]: I0216 00:14:22.062789 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:22 crc kubenswrapper[5114]: I0216 00:14:22.062891 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:22 crc kubenswrapper[5114]: I0216 00:14:22.113960 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:22 crc kubenswrapper[5114]: I0216 00:14:22.350272 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:14:22 crc kubenswrapper[5114]: I0216 00:14:22.374466 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-858qd" Feb 16 00:14:24 crc kubenswrapper[5114]: I0216 00:14:24.227303 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:24 crc kubenswrapper[5114]: I0216 00:14:24.228336 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:24 crc kubenswrapper[5114]: I0216 00:14:24.308159 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:24 crc kubenswrapper[5114]: I0216 00:14:24.387942 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x55hq" Feb 16 00:14:24 crc kubenswrapper[5114]: I0216 00:14:24.416671 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:24 crc 
kubenswrapper[5114]: I0216 00:14:24.416725 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:24 crc kubenswrapper[5114]: I0216 00:14:24.514298 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:25 crc kubenswrapper[5114]: I0216 00:14:25.380818 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sm2s6" Feb 16 00:14:30 crc kubenswrapper[5114]: I0216 00:14:30.094580 5114 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 16 00:14:36 crc kubenswrapper[5114]: I0216 00:14:36.233909 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-rld8n" Feb 16 00:14:36 crc kubenswrapper[5114]: I0216 00:14:36.291554 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmt8j"] Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.192475 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h"] Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.211930 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h"] Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.212251 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.214841 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.225078 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.313597 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np6t9\" (UniqueName: \"kubernetes.io/projected/c5bac77b-5a58-4fb0-82d0-562b5e198434-kube-api-access-np6t9\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.313651 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5bac77b-5a58-4fb0-82d0-562b5e198434-secret-volume\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.313682 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5bac77b-5a58-4fb0-82d0-562b5e198434-config-volume\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.415319 5114 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"kube-api-access-np6t9\" (UniqueName: \"kubernetes.io/projected/c5bac77b-5a58-4fb0-82d0-562b5e198434-kube-api-access-np6t9\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.415401 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5bac77b-5a58-4fb0-82d0-562b5e198434-secret-volume\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.415470 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5bac77b-5a58-4fb0-82d0-562b5e198434-config-volume\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.417034 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5bac77b-5a58-4fb0-82d0-562b5e198434-config-volume\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.424150 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5bac77b-5a58-4fb0-82d0-562b5e198434-secret-volume\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc 
kubenswrapper[5114]: I0216 00:15:00.435494 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-np6t9\" (UniqueName: \"kubernetes.io/projected/c5bac77b-5a58-4fb0-82d0-562b5e198434-kube-api-access-np6t9\") pod \"collect-profiles-29520015-4lg5h\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:00 crc kubenswrapper[5114]: I0216 00:15:00.535873 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.029768 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h"] Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.354478 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" podUID="747ba08a-df9e-422d-be4e-f2ababc30dea" containerName="registry" containerID="cri-o://90d8a2a069abbd568392f18ee3971e6e788cfadda8bbbc654fe454a8696aed67" gracePeriod=30 Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.591332 5114 generic.go:358] "Generic (PLEG): container finished" podID="747ba08a-df9e-422d-be4e-f2ababc30dea" containerID="90d8a2a069abbd568392f18ee3971e6e788cfadda8bbbc654fe454a8696aed67" exitCode=0 Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.591431 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" event={"ID":"747ba08a-df9e-422d-be4e-f2ababc30dea","Type":"ContainerDied","Data":"90d8a2a069abbd568392f18ee3971e6e788cfadda8bbbc654fe454a8696aed67"} Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.594554 5114 generic.go:358] "Generic (PLEG): container finished" podID="c5bac77b-5a58-4fb0-82d0-562b5e198434" 
containerID="0cd5652b07878c7d57fa832620c4bc92202ef2a201dd736bec5c32fd787ffac6" exitCode=0 Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.594614 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" event={"ID":"c5bac77b-5a58-4fb0-82d0-562b5e198434","Type":"ContainerDied","Data":"0cd5652b07878c7d57fa832620c4bc92202ef2a201dd736bec5c32fd787ffac6"} Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.594688 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" event={"ID":"c5bac77b-5a58-4fb0-82d0-562b5e198434","Type":"ContainerStarted","Data":"8280682ae13e33700c5c7ee2b98c5048fea0e8b769c18e71e282751a5c11b5b9"} Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.774421 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937424 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937495 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-kube-api-access-5cqfg\") pod \"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") " Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937522 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-bound-sa-token\") pod 
\"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") "
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937562 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-tls\") pod \"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") "
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937605 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/747ba08a-df9e-422d-be4e-f2ababc30dea-ca-trust-extracted\") pod \"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") "
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937636 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-trusted-ca\") pod \"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") "
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937719 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-certificates\") pod \"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") "
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.937838 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/747ba08a-df9e-422d-be4e-f2ababc30dea-installation-pull-secrets\") pod \"747ba08a-df9e-422d-be4e-f2ababc30dea\" (UID: \"747ba08a-df9e-422d-be4e-f2ababc30dea\") "
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.939748 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.939953 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.947009 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.948756 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747ba08a-df9e-422d-be4e-f2ababc30dea-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.948974 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.951080 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-kube-api-access-5cqfg" (OuterVolumeSpecName: "kube-api-access-5cqfg") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "kube-api-access-5cqfg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.959177 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
Feb 16 00:15:01 crc kubenswrapper[5114]: I0216 00:15:01.964069 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747ba08a-df9e-422d-be4e-f2ababc30dea-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "747ba08a-df9e-422d-be4e-f2ababc30dea" (UID: "747ba08a-df9e-422d-be4e-f2ababc30dea"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.040866 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5cqfg\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-kube-api-access-5cqfg\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.040949 5114 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.041014 5114 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-tls\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.041037 5114 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/747ba08a-df9e-422d-be4e-f2ababc30dea-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.041057 5114 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.041111 5114 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/747ba08a-df9e-422d-be4e-f2ababc30dea-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.041130 5114 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/747ba08a-df9e-422d-be4e-f2ababc30dea-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.606961 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j"
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.606962 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kmt8j" event={"ID":"747ba08a-df9e-422d-be4e-f2ababc30dea","Type":"ContainerDied","Data":"a456cdef92bc4ed9155c3320e55bc1a4541f695ad91394cf65b898160f990b3b"}
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.607595 5114 scope.go:117] "RemoveContainer" containerID="90d8a2a069abbd568392f18ee3971e6e788cfadda8bbbc654fe454a8696aed67"
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.647392 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmt8j"]
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.658227 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kmt8j"]
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.852364 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h"
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.952457 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-np6t9\" (UniqueName: \"kubernetes.io/projected/c5bac77b-5a58-4fb0-82d0-562b5e198434-kube-api-access-np6t9\") pod \"c5bac77b-5a58-4fb0-82d0-562b5e198434\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") "
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.952621 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5bac77b-5a58-4fb0-82d0-562b5e198434-secret-volume\") pod \"c5bac77b-5a58-4fb0-82d0-562b5e198434\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") "
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.952680 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5bac77b-5a58-4fb0-82d0-562b5e198434-config-volume\") pod \"c5bac77b-5a58-4fb0-82d0-562b5e198434\" (UID: \"c5bac77b-5a58-4fb0-82d0-562b5e198434\") "
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.953515 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5bac77b-5a58-4fb0-82d0-562b5e198434-config-volume" (OuterVolumeSpecName: "config-volume") pod "c5bac77b-5a58-4fb0-82d0-562b5e198434" (UID: "c5bac77b-5a58-4fb0-82d0-562b5e198434"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.957886 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5bac77b-5a58-4fb0-82d0-562b5e198434-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c5bac77b-5a58-4fb0-82d0-562b5e198434" (UID: "c5bac77b-5a58-4fb0-82d0-562b5e198434"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:15:02 crc kubenswrapper[5114]: I0216 00:15:02.958406 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5bac77b-5a58-4fb0-82d0-562b5e198434-kube-api-access-np6t9" (OuterVolumeSpecName: "kube-api-access-np6t9") pod "c5bac77b-5a58-4fb0-82d0-562b5e198434" (UID: "c5bac77b-5a58-4fb0-82d0-562b5e198434"). InnerVolumeSpecName "kube-api-access-np6t9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:15:03 crc kubenswrapper[5114]: I0216 00:15:03.054201 5114 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c5bac77b-5a58-4fb0-82d0-562b5e198434-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:03 crc kubenswrapper[5114]: I0216 00:15:03.054314 5114 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5bac77b-5a58-4fb0-82d0-562b5e198434-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:03 crc kubenswrapper[5114]: I0216 00:15:03.054335 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-np6t9\" (UniqueName: \"kubernetes.io/projected/c5bac77b-5a58-4fb0-82d0-562b5e198434-kube-api-access-np6t9\") on node \"crc\" DevicePath \"\""
Feb 16 00:15:03 crc kubenswrapper[5114]: I0216 00:15:03.906232 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h"
Feb 16 00:15:03 crc kubenswrapper[5114]: I0216 00:15:03.907748 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747ba08a-df9e-422d-be4e-f2ababc30dea" path="/var/lib/kubelet/pods/747ba08a-df9e-422d-be4e-f2ababc30dea/volumes"
Feb 16 00:15:03 crc kubenswrapper[5114]: I0216 00:15:03.908965 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520015-4lg5h" event={"ID":"c5bac77b-5a58-4fb0-82d0-562b5e198434","Type":"ContainerDied","Data":"8280682ae13e33700c5c7ee2b98c5048fea0e8b769c18e71e282751a5c11b5b9"}
Feb 16 00:15:03 crc kubenswrapper[5114]: I0216 00:15:03.909016 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8280682ae13e33700c5c7ee2b98c5048fea0e8b769c18e71e282751a5c11b5b9"
Feb 16 00:15:50 crc kubenswrapper[5114]: I0216 00:15:50.086053 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:15:50 crc kubenswrapper[5114]: I0216 00:15:50.087342 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.151794 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29520016-jfhs6"]
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.155742 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c5bac77b-5a58-4fb0-82d0-562b5e198434" containerName="collect-profiles"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.155805 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5bac77b-5a58-4fb0-82d0-562b5e198434" containerName="collect-profiles"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.155850 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="747ba08a-df9e-422d-be4e-f2ababc30dea" containerName="registry"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.155859 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="747ba08a-df9e-422d-be4e-f2ababc30dea" containerName="registry"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.156498 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="747ba08a-df9e-422d-be4e-f2ababc30dea" containerName="registry"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.156540 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="c5bac77b-5a58-4fb0-82d0-562b5e198434" containerName="collect-profiles"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.172533 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520016-jfhs6"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.178001 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.178329 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\""
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.178770 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.188564 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520016-jfhs6"]
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.226791 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqsrf\" (UniqueName: \"kubernetes.io/projected/932e8fef-e1b4-4e9c-a29d-5460a6497aa3-kube-api-access-pqsrf\") pod \"auto-csr-approver-29520016-jfhs6\" (UID: \"932e8fef-e1b4-4e9c-a29d-5460a6497aa3\") " pod="openshift-infra/auto-csr-approver-29520016-jfhs6"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.329376 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pqsrf\" (UniqueName: \"kubernetes.io/projected/932e8fef-e1b4-4e9c-a29d-5460a6497aa3-kube-api-access-pqsrf\") pod \"auto-csr-approver-29520016-jfhs6\" (UID: \"932e8fef-e1b4-4e9c-a29d-5460a6497aa3\") " pod="openshift-infra/auto-csr-approver-29520016-jfhs6"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.369416 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqsrf\" (UniqueName: \"kubernetes.io/projected/932e8fef-e1b4-4e9c-a29d-5460a6497aa3-kube-api-access-pqsrf\") pod \"auto-csr-approver-29520016-jfhs6\" (UID: \"932e8fef-e1b4-4e9c-a29d-5460a6497aa3\") " pod="openshift-infra/auto-csr-approver-29520016-jfhs6"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.498373 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520016-jfhs6"
Feb 16 00:16:00 crc kubenswrapper[5114]: I0216 00:16:00.969962 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520016-jfhs6"]
Feb 16 00:16:01 crc kubenswrapper[5114]: I0216 00:16:01.328928 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520016-jfhs6" event={"ID":"932e8fef-e1b4-4e9c-a29d-5460a6497aa3","Type":"ContainerStarted","Data":"eaea625c2981a7a882139dd10f3598f80dcc01b72987ce376877087aaba9c34f"}
Feb 16 00:16:04 crc kubenswrapper[5114]: I0216 00:16:04.796341 5114 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-5tcsr"
Feb 16 00:16:04 crc kubenswrapper[5114]: I0216 00:16:04.843600 5114 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-5tcsr"
Feb 16 00:16:05 crc kubenswrapper[5114]: I0216 00:16:05.361831 5114 generic.go:358] "Generic (PLEG): container finished" podID="932e8fef-e1b4-4e9c-a29d-5460a6497aa3" containerID="8df397633cead5b193b3d652bf4ed302a5acdefe24ccc6fa92c11bc346083e71" exitCode=0
Feb 16 00:16:05 crc kubenswrapper[5114]: I0216 00:16:05.361970 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520016-jfhs6" event={"ID":"932e8fef-e1b4-4e9c-a29d-5460a6497aa3","Type":"ContainerDied","Data":"8df397633cead5b193b3d652bf4ed302a5acdefe24ccc6fa92c11bc346083e71"}
Feb 16 00:16:05 crc kubenswrapper[5114]: I0216 00:16:05.845553 5114 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-18 00:11:04 +0000 UTC" deadline="2026-03-09 02:31:43.837567383 +0000 UTC"
Feb 16 00:16:05 crc kubenswrapper[5114]: I0216 00:16:05.845612 5114 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="506h15m37.991957866s"
Feb 16 00:16:06 crc kubenswrapper[5114]: I0216 00:16:06.749660 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520016-jfhs6"
Feb 16 00:16:06 crc kubenswrapper[5114]: I0216 00:16:06.756767 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqsrf\" (UniqueName: \"kubernetes.io/projected/932e8fef-e1b4-4e9c-a29d-5460a6497aa3-kube-api-access-pqsrf\") pod \"932e8fef-e1b4-4e9c-a29d-5460a6497aa3\" (UID: \"932e8fef-e1b4-4e9c-a29d-5460a6497aa3\") "
Feb 16 00:16:06 crc kubenswrapper[5114]: I0216 00:16:06.772854 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/932e8fef-e1b4-4e9c-a29d-5460a6497aa3-kube-api-access-pqsrf" (OuterVolumeSpecName: "kube-api-access-pqsrf") pod "932e8fef-e1b4-4e9c-a29d-5460a6497aa3" (UID: "932e8fef-e1b4-4e9c-a29d-5460a6497aa3"). InnerVolumeSpecName "kube-api-access-pqsrf". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:16:06 crc kubenswrapper[5114]: I0216 00:16:06.845784 5114 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-03-18 00:11:04 +0000 UTC" deadline="2026-03-12 04:23:05.014839251 +0000 UTC"
Feb 16 00:16:06 crc kubenswrapper[5114]: I0216 00:16:06.845837 5114 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="580h6m58.169005445s"
Feb 16 00:16:06 crc kubenswrapper[5114]: I0216 00:16:06.859084 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pqsrf\" (UniqueName: \"kubernetes.io/projected/932e8fef-e1b4-4e9c-a29d-5460a6497aa3-kube-api-access-pqsrf\") on node \"crc\" DevicePath \"\""
Feb 16 00:16:07 crc kubenswrapper[5114]: I0216 00:16:07.429713 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520016-jfhs6"
Feb 16 00:16:07 crc kubenswrapper[5114]: I0216 00:16:07.429791 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520016-jfhs6" event={"ID":"932e8fef-e1b4-4e9c-a29d-5460a6497aa3","Type":"ContainerDied","Data":"eaea625c2981a7a882139dd10f3598f80dcc01b72987ce376877087aaba9c34f"}
Feb 16 00:16:07 crc kubenswrapper[5114]: I0216 00:16:07.429857 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaea625c2981a7a882139dd10f3598f80dcc01b72987ce376877087aaba9c34f"
Feb 16 00:16:20 crc kubenswrapper[5114]: I0216 00:16:20.085185 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:16:20 crc kubenswrapper[5114]: I0216 00:16:20.085755 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.085465 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.086723 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.086821 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn"
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.088053 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a3ce095df471cd9bc6cb7b32e5ca37c749a18ef9c74e7e6da2f540e061ab35d"} pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.088184 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" containerID="cri-o://8a3ce095df471cd9bc6cb7b32e5ca37c749a18ef9c74e7e6da2f540e061ab35d" gracePeriod=600
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.761273 5114 generic.go:358] "Generic (PLEG): container finished" podID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerID="8a3ce095df471cd9bc6cb7b32e5ca37c749a18ef9c74e7e6da2f540e061ab35d" exitCode=0
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.761286 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerDied","Data":"8a3ce095df471cd9bc6cb7b32e5ca37c749a18ef9c74e7e6da2f540e061ab35d"}
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.762469 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"e134b7537fe941db009f9833124e34b05d191a4535dab34b636141af6e8135c3"}
Feb 16 00:16:50 crc kubenswrapper[5114]: I0216 00:16:50.762491 5114 scope.go:117] "RemoveContainer" containerID="e129ae4ee7d3742ba2d538ce3a74a1fc75d899264cde2462cc24760ecb7481d2"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.151831 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29520018-5lsvl"]
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.154972 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="932e8fef-e1b4-4e9c-a29d-5460a6497aa3" containerName="oc"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.155010 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="932e8fef-e1b4-4e9c-a29d-5460a6497aa3" containerName="oc"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.155301 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="932e8fef-e1b4-4e9c-a29d-5460a6497aa3" containerName="oc"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.173184 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520018-5lsvl"]
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.173447 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520018-5lsvl"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.177829 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\""
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.178220 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.178352 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.361783 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpvj9\" (UniqueName: \"kubernetes.io/projected/c67c5be5-e4b3-47d6-a4c7-95cba7f5830b-kube-api-access-hpvj9\") pod \"auto-csr-approver-29520018-5lsvl\" (UID: \"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b\") " pod="openshift-infra/auto-csr-approver-29520018-5lsvl"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.463579 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hpvj9\" (UniqueName: \"kubernetes.io/projected/c67c5be5-e4b3-47d6-a4c7-95cba7f5830b-kube-api-access-hpvj9\") pod \"auto-csr-approver-29520018-5lsvl\" (UID: \"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b\") " pod="openshift-infra/auto-csr-approver-29520018-5lsvl"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.498868 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpvj9\" (UniqueName: \"kubernetes.io/projected/c67c5be5-e4b3-47d6-a4c7-95cba7f5830b-kube-api-access-hpvj9\") pod \"auto-csr-approver-29520018-5lsvl\" (UID: \"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b\") " pod="openshift-infra/auto-csr-approver-29520018-5lsvl"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.506801 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520018-5lsvl"
Feb 16 00:18:00 crc kubenswrapper[5114]: I0216 00:18:00.777472 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520018-5lsvl"]
Feb 16 00:18:01 crc kubenswrapper[5114]: I0216 00:18:01.285915 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520018-5lsvl" event={"ID":"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b","Type":"ContainerStarted","Data":"547731a39dfcf92dde0e6ca713732bf1e3516c5e1509357a64477c075456fede"}
Feb 16 00:18:02 crc kubenswrapper[5114]: I0216 00:18:02.299311 5114 generic.go:358] "Generic (PLEG): container finished" podID="c67c5be5-e4b3-47d6-a4c7-95cba7f5830b" containerID="8bd5f4ce0c03de6b040840a3a83bd1508fe7fce6170f120c9ff883f946c8e06b" exitCode=0
Feb 16 00:18:02 crc kubenswrapper[5114]: I0216 00:18:02.299404 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520018-5lsvl" event={"ID":"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b","Type":"ContainerDied","Data":"8bd5f4ce0c03de6b040840a3a83bd1508fe7fce6170f120c9ff883f946c8e06b"}
Feb 16 00:18:03 crc kubenswrapper[5114]: I0216 00:18:03.704041 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520018-5lsvl"
Feb 16 00:18:03 crc kubenswrapper[5114]: I0216 00:18:03.767890 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpvj9\" (UniqueName: \"kubernetes.io/projected/c67c5be5-e4b3-47d6-a4c7-95cba7f5830b-kube-api-access-hpvj9\") pod \"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b\" (UID: \"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b\") "
Feb 16 00:18:03 crc kubenswrapper[5114]: I0216 00:18:03.776066 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c67c5be5-e4b3-47d6-a4c7-95cba7f5830b-kube-api-access-hpvj9" (OuterVolumeSpecName: "kube-api-access-hpvj9") pod "c67c5be5-e4b3-47d6-a4c7-95cba7f5830b" (UID: "c67c5be5-e4b3-47d6-a4c7-95cba7f5830b"). InnerVolumeSpecName "kube-api-access-hpvj9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:18:03 crc kubenswrapper[5114]: I0216 00:18:03.870920 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hpvj9\" (UniqueName: \"kubernetes.io/projected/c67c5be5-e4b3-47d6-a4c7-95cba7f5830b-kube-api-access-hpvj9\") on node \"crc\" DevicePath \"\""
Feb 16 00:18:04 crc kubenswrapper[5114]: I0216 00:18:04.483825 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520018-5lsvl"
Feb 16 00:18:04 crc kubenswrapper[5114]: I0216 00:18:04.483821 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520018-5lsvl" event={"ID":"c67c5be5-e4b3-47d6-a4c7-95cba7f5830b","Type":"ContainerDied","Data":"547731a39dfcf92dde0e6ca713732bf1e3516c5e1509357a64477c075456fede"}
Feb 16 00:18:04 crc kubenswrapper[5114]: I0216 00:18:04.484482 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="547731a39dfcf92dde0e6ca713732bf1e3516c5e1509357a64477c075456fede"
Feb 16 00:18:46 crc kubenswrapper[5114]: I0216 00:18:46.179516 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 16 00:18:46 crc kubenswrapper[5114]: I0216 00:18:46.184033 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 16 00:18:50 crc kubenswrapper[5114]: I0216 00:18:50.085106 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:18:50 crc kubenswrapper[5114]: I0216 00:18:50.087132 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.054608 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf"]
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.055958 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="kube-rbac-proxy" containerID="cri-o://57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.056014 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="ovnkube-cluster-manager" containerID="cri-o://ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.235824 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.273232 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9clwb"]
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.274083 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-controller" containerID="cri-o://bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.274161 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="sbdb" containerID="cri-o://578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.274314 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="northd" containerID="cri-o://a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.274232 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="nbdb" containerID="cri-o://48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.274399 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.274455 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-node" containerID="cri-o://04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.274508 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-acl-logging" containerID="cri-o://7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" gracePeriod=30
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.282977 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl"]
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.283879 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c67c5be5-e4b3-47d6-a4c7-95cba7f5830b" containerName="oc"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.283903 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="c67c5be5-e4b3-47d6-a4c7-95cba7f5830b" containerName="oc"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.283940 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="ovnkube-cluster-manager"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.283946 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="ovnkube-cluster-manager"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.283956 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="kube-rbac-proxy"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.283963 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="kube-rbac-proxy"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.284067 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="c67c5be5-e4b3-47d6-a4c7-95cba7f5830b" containerName="oc"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.284084 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="kube-rbac-proxy"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.284093 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerName="ovnkube-cluster-manager"
Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.292661 5114 util.go:30] "No sandbox for pod
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.302930 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovnkube-controller" containerID="cri-o://707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" gracePeriod=30 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.305323 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovn-control-plane-metrics-cert\") pod \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.305373 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-env-overrides\") pod \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.305409 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovnkube-config\") pod \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.305523 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phgcx\" (UniqueName: \"kubernetes.io/projected/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-kube-api-access-phgcx\") pod \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\" (UID: \"1a832ec7-da6a-4e0b-8b74-47f2038c0c13\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.306845 
5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "1a832ec7-da6a-4e0b-8b74-47f2038c0c13" (UID: "1a832ec7-da6a-4e0b-8b74-47f2038c0c13"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.307619 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "1a832ec7-da6a-4e0b-8b74-47f2038c0c13" (UID: "1a832ec7-da6a-4e0b-8b74-47f2038c0c13"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.326423 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-kube-api-access-phgcx" (OuterVolumeSpecName: "kube-api-access-phgcx") pod "1a832ec7-da6a-4e0b-8b74-47f2038c0c13" (UID: "1a832ec7-da6a-4e0b-8b74-47f2038c0c13"). InnerVolumeSpecName "kube-api-access-phgcx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.330772 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "1a832ec7-da6a-4e0b-8b74-47f2038c0c13" (UID: "1a832ec7-da6a-4e0b-8b74-47f2038c0c13"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.407311 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.407392 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.407534 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.407868 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n6ds\" (UniqueName: \"kubernetes.io/projected/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-kube-api-access-9n6ds\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.408030 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.408052 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phgcx\" (UniqueName: \"kubernetes.io/projected/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-kube-api-access-phgcx\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.408067 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.408077 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a832ec7-da6a-4e0b-8b74-47f2038c0c13-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.509031 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9n6ds\" (UniqueName: \"kubernetes.io/projected/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-kube-api-access-9n6ds\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.509163 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.509867 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.509952 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.510752 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.512172 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.514344 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc 
kubenswrapper[5114]: I0216 00:19:00.524731 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n6ds\" (UniqueName: \"kubernetes.io/projected/a6400a1e-d6ac-4457-b471-c7c6347a8a8d-kube-api-access-9n6ds\") pod \"ovnkube-control-plane-97c9b6c48-rqxkl\" (UID: \"a6400a1e-d6ac-4457-b471-c7c6347a8a8d\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.634038 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9clwb_6b3c2120-6c92-4855-86fc-a08ba5b7f48c/ovn-acl-logging/0.log" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.634593 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9clwb_6b3c2120-6c92-4855-86fc-a08ba5b7f48c/ovn-controller/0.log" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.635117 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.702223 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ww2t5"] Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.702966 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.702995 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703023 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="northd" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703032 5114 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="northd" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703049 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="nbdb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703057 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="nbdb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703069 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-controller" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703078 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-controller" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703095 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="sbdb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703103 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="sbdb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703114 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovnkube-controller" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703122 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovnkube-controller" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703130 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kubecfg-setup" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703139 5114 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kubecfg-setup" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703151 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-node" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703159 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-node" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703172 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-acl-logging" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703180 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-acl-logging" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703313 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="northd" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703330 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-controller" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703341 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovn-acl-logging" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703351 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="sbdb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703361 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-node" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703377 5114 
memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="kube-rbac-proxy-ovn-metrics" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703385 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="ovnkube-controller" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.703395 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerName="nbdb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.708540 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.712664 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-systemd-units\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.712769 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-slash\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.712800 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.712855 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxrth\" (UniqueName: \"kubernetes.io/projected/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-kube-api-access-qxrth\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.712872 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-slash" (OuterVolumeSpecName: "host-slash") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.712897 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-env-overrides\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713103 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713159 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-ovn-kubernetes\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 
crc kubenswrapper[5114]: I0216 00:19:00.713240 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-netd\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713243 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713289 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-systemd\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713389 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-etc-openvswitch\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713377 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713470 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-bin\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713396 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713514 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-config\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713544 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-log-socket\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713544 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713651 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-log-socket" (OuterVolumeSpecName: "log-socket") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713695 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.713939 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovn-node-metrics-cert\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714015 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-node-log\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714081 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-kubelet\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc 
kubenswrapper[5114]: I0216 00:19:00.714115 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-netns\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714162 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-script-lib\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714225 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-var-lib-openvswitch\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714297 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-ovn\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714350 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-openvswitch\") pod \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\" (UID: \"6b3c2120-6c92-4855-86fc-a08ba5b7f48c\") " Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714442 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-kubelet" (OuterVolumeSpecName: 
"host-kubelet") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.714624 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-node-log" (OuterVolumeSpecName: "node-log") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715040 5114 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715148 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715153 5114 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715178 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715180 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715211 5114 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715198 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715235 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715203 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715286 5114 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715311 5114 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715328 5114 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-log-socket\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715343 5114 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-node-log\") on node \"crc\" 
DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715357 5114 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715373 5114 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715389 5114 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-slash\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.715497 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.718923 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.719312 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-kube-api-access-qxrth" (OuterVolumeSpecName: "kube-api-access-qxrth") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "kube-api-access-qxrth". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.721761 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.738627 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6b3c2120-6c92-4855-86fc-a08ba5b7f48c" (UID: "6b3c2120-6c92-4855-86fc-a08ba5b7f48c"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.757783 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.817577 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-run-ovn-kubernetes\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.817655 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-env-overrides\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.817705 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-cni-netd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.817747 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-systemd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.817920 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-cni-bin\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.818007 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-systemd-units\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.818272 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovnkube-script-lib\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 
00:19:00.818403 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-run-netns\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.818510 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-etc-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.818566 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpzgd\" (UniqueName: \"kubernetes.io/projected/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-kube-api-access-qpzgd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.818645 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-slash\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.818681 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-ovn\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc 
kubenswrapper[5114]: I0216 00:19:00.819105 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819200 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-var-lib-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819292 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-log-socket\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819424 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovnkube-config\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819511 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819549 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-node-log\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819630 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-kubelet\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819709 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovn-node-metrics-cert\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819862 5114 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819897 5114 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819919 5114 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819942 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qxrth\" (UniqueName: \"kubernetes.io/projected/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-kube-api-access-qxrth\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819965 5114 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.819987 5114 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.820009 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.820030 5114 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.820047 5114 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.820065 5114 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b3c2120-6c92-4855-86fc-a08ba5b7f48c-ovnkube-script-lib\") on node \"crc\" 
DevicePath \"\"" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921345 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-var-lib-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921476 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-log-socket\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921552 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovnkube-config\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921620 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-log-socket\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921684 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-var-lib-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921754 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921778 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-node-log\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921804 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-kubelet\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921826 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovn-node-metrics-cert\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921859 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-run-ovn-kubernetes\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921880 5114 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-env-overrides\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921903 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-cni-netd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921925 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-systemd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921933 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-node-log\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921948 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-cni-bin\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921993 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-systemd-units\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922013 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovnkube-script-lib\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922058 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-run-netns\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922081 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-etc-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922102 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qpzgd\" (UniqueName: \"kubernetes.io/projected/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-kube-api-access-qpzgd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922119 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-slash\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922135 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-ovn\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922176 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922278 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922427 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovnkube-config\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922686 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-run-ovn-kubernetes\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922743 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-etc-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922784 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-env-overrides\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922835 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-ovn\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922849 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-slash\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922865 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-run-netns\") pod \"ovnkube-node-ww2t5\" (UID: 
\"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922922 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-systemd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.922952 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-cni-netd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921901 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-kubelet\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.921855 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-run-openvswitch\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.923005 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-host-cni-bin\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc 
kubenswrapper[5114]: I0216 00:19:00.923039 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-systemd-units\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.923064 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovnkube-script-lib\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.925936 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-ovn-node-metrics-cert\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.942158 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpzgd\" (UniqueName: \"kubernetes.io/projected/d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81-kube-api-access-qpzgd\") pod \"ovnkube-node-ww2t5\" (UID: \"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81\") " pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.945924 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" event={"ID":"a6400a1e-d6ac-4457-b471-c7c6347a8a8d","Type":"ContainerStarted","Data":"4363aba53b528c58ec828a86078614e1f2a5b3e9a1b9d4964248fc96bd237f0a"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.947715 5114 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-5jlj6_c4627438-b1a6-4cc9-85f6-10e9dd97943b/kube-multus/0.log" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.947749 5114 generic.go:358] "Generic (PLEG): container finished" podID="c4627438-b1a6-4cc9-85f6-10e9dd97943b" containerID="c83dc83d3735a8f6a2016857bcda28e79e5e7c3dc6e7dc96fdff987a03f69e42" exitCode=2 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.947782 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jlj6" event={"ID":"c4627438-b1a6-4cc9-85f6-10e9dd97943b","Type":"ContainerDied","Data":"c83dc83d3735a8f6a2016857bcda28e79e5e7c3dc6e7dc96fdff987a03f69e42"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.948804 5114 scope.go:117] "RemoveContainer" containerID="c83dc83d3735a8f6a2016857bcda28e79e5e7c3dc6e7dc96fdff987a03f69e42" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.953360 5114 generic.go:358] "Generic (PLEG): container finished" podID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerID="ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616" exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.953389 5114 generic.go:358] "Generic (PLEG): container finished" podID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" containerID="57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044" exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.953794 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.957345 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" event={"ID":"1a832ec7-da6a-4e0b-8b74-47f2038c0c13","Type":"ContainerDied","Data":"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.957415 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" event={"ID":"1a832ec7-da6a-4e0b-8b74-47f2038c0c13","Type":"ContainerDied","Data":"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.957437 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf" event={"ID":"1a832ec7-da6a-4e0b-8b74-47f2038c0c13","Type":"ContainerDied","Data":"892d2b4ac5d8bfb4f0f72f70eefb56d9ccaf4de7777ead9a2b067bdc2c88ae69"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.957466 5114 scope.go:117] "RemoveContainer" containerID="ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.962629 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9clwb_6b3c2120-6c92-4855-86fc-a08ba5b7f48c/ovn-acl-logging/0.log" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.963173 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-9clwb_6b3c2120-6c92-4855-86fc-a08ba5b7f48c/ovn-controller/0.log" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.963884 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" 
exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.963906 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964065 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964116 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964133 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964663 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964719 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964794 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964806 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964813 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" exitCode=0 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964821 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" exitCode=143 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964835 5114 generic.go:358] "Generic (PLEG): container finished" podID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" containerID="bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" exitCode=143 Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964856 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.964875 5114 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965473 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965498 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965545 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965551 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965557 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965562 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965567 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965572 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965578 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965583 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965592 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965605 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965612 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965619 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965625 5114 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965631 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965637 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965642 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965648 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965653 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965660 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965667 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} Feb 16 
00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965676 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965681 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965686 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965692 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965697 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965702 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965708 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965713 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} Feb 16 
00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965720 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9clwb" event={"ID":"6b3c2120-6c92-4855-86fc-a08ba5b7f48c","Type":"ContainerDied","Data":"9015acb8881104f78edb88af78faa0f1ff7c5e163a8507213340ebf1a7c54e64"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965728 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965735 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965740 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965746 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965752 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965757 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965762 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965767 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.965772 5114 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.996769 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf"] Feb 16 00:19:00 crc kubenswrapper[5114]: I0216 00:19:00.997119 5114 scope.go:117] "RemoveContainer" containerID="57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.002472 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-44hnf"] Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.035182 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.048900 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9clwb"] Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.051617 5114 scope.go:117] "RemoveContainer" containerID="ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.053307 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616\": container with ID starting with ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616 not found: ID does not exist" containerID="ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.053352 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616"} err="failed to get container status \"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616\": rpc error: code = NotFound desc = could not find container \"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616\": container with ID starting with ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.053378 5114 scope.go:117] "RemoveContainer" containerID="57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.053799 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044\": container with ID starting with 
57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044 not found: ID does not exist" containerID="57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.053824 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044"} err="failed to get container status \"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044\": rpc error: code = NotFound desc = could not find container \"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044\": container with ID starting with 57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.053839 5114 scope.go:117] "RemoveContainer" containerID="ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.054167 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616"} err="failed to get container status \"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616\": rpc error: code = NotFound desc = could not find container \"ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616\": container with ID starting with ef7c7b052f39f66b9505cc7a9b6fffb9f3824ac92094f4cd79c0c1d4e9924616 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.054185 5114 scope.go:117] "RemoveContainer" containerID="57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.054482 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044"} err="failed to get container status 
\"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044\": rpc error: code = NotFound desc = could not find container \"57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044\": container with ID starting with 57e6ae8c2dff50ca2264d69c406e978c04f5f1db92566cb2f519be7031ace044 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.054505 5114 scope.go:117] "RemoveContainer" containerID="707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.058979 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9clwb"] Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.083650 5114 scope.go:117] "RemoveContainer" containerID="578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" Feb 16 00:19:01 crc kubenswrapper[5114]: W0216 00:19:01.094172 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30ff5c2_cac2_44a3_bdb0_85a68ee1bd81.slice/crio-4b328bdf4108c103f46aa6e6d715b92e23e967d9d7cbd7437de0828c64aad8e7 WatchSource:0}: Error finding container 4b328bdf4108c103f46aa6e6d715b92e23e967d9d7cbd7437de0828c64aad8e7: Status 404 returned error can't find the container with id 4b328bdf4108c103f46aa6e6d715b92e23e967d9d7cbd7437de0828c64aad8e7 Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.100979 5114 scope.go:117] "RemoveContainer" containerID="48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.125165 5114 scope.go:117] "RemoveContainer" containerID="a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.145759 5114 scope.go:117] "RemoveContainer" containerID="81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.172162 5114 
scope.go:117] "RemoveContainer" containerID="04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.250976 5114 scope.go:117] "RemoveContainer" containerID="7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.294067 5114 scope.go:117] "RemoveContainer" containerID="bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.330947 5114 scope.go:117] "RemoveContainer" containerID="df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.352634 5114 scope.go:117] "RemoveContainer" containerID="707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.353220 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": container with ID starting with 707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea not found: ID does not exist" containerID="707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.353284 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} err="failed to get container status \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": rpc error: code = NotFound desc = could not find container \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": container with ID starting with 707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.353318 5114 scope.go:117] "RemoveContainer" 
containerID="578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.353607 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": container with ID starting with 578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1 not found: ID does not exist" containerID="578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.353645 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} err="failed to get container status \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": rpc error: code = NotFound desc = could not find container \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": container with ID starting with 578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.353669 5114 scope.go:117] "RemoveContainer" containerID="48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.354036 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": container with ID starting with 48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c not found: ID does not exist" containerID="48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.354069 5114 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} err="failed to get container status \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": rpc error: code = NotFound desc = could not find container \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": container with ID starting with 48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.354090 5114 scope.go:117] "RemoveContainer" containerID="a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.354426 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": container with ID starting with a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8 not found: ID does not exist" containerID="a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.354461 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} err="failed to get container status \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": rpc error: code = NotFound desc = could not find container \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": container with ID starting with a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.354480 5114 scope.go:117] "RemoveContainer" containerID="81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.354807 5114 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": container with ID starting with 81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af not found: ID does not exist" containerID="81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.354845 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} err="failed to get container status \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": rpc error: code = NotFound desc = could not find container \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": container with ID starting with 81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.354870 5114 scope.go:117] "RemoveContainer" containerID="04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.355166 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": container with ID starting with 04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b not found: ID does not exist" containerID="04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.355201 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} err="failed to get container status \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": rpc error: code = NotFound desc = could not find container 
\"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": container with ID starting with 04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.355224 5114 scope.go:117] "RemoveContainer" containerID="7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.355677 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": container with ID starting with 7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d not found: ID does not exist" containerID="7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.355712 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} err="failed to get container status \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": rpc error: code = NotFound desc = could not find container \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": container with ID starting with 7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.355737 5114 scope.go:117] "RemoveContainer" containerID="bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.356166 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": container with ID starting with bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3 not found: ID does not exist" 
containerID="bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.356198 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} err="failed to get container status \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": rpc error: code = NotFound desc = could not find container \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": container with ID starting with bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.356240 5114 scope.go:117] "RemoveContainer" containerID="df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063" Feb 16 00:19:01 crc kubenswrapper[5114]: E0216 00:19:01.356590 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": container with ID starting with df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063 not found: ID does not exist" containerID="df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.356627 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} err="failed to get container status \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": rpc error: code = NotFound desc = could not find container \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": container with ID starting with df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.356646 5114 scope.go:117] 
"RemoveContainer" containerID="707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.356986 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} err="failed to get container status \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": rpc error: code = NotFound desc = could not find container \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": container with ID starting with 707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.357013 5114 scope.go:117] "RemoveContainer" containerID="578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.357415 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} err="failed to get container status \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": rpc error: code = NotFound desc = could not find container \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": container with ID starting with 578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.357440 5114 scope.go:117] "RemoveContainer" containerID="48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.357762 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} err="failed to get container status \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": rpc error: code = 
NotFound desc = could not find container \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": container with ID starting with 48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.357789 5114 scope.go:117] "RemoveContainer" containerID="a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.358182 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} err="failed to get container status \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": rpc error: code = NotFound desc = could not find container \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": container with ID starting with a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.358216 5114 scope.go:117] "RemoveContainer" containerID="81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.358635 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} err="failed to get container status \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": rpc error: code = NotFound desc = could not find container \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": container with ID starting with 81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.358694 5114 scope.go:117] "RemoveContainer" containerID="04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" Feb 16 00:19:01 crc 
kubenswrapper[5114]: I0216 00:19:01.359022 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} err="failed to get container status \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": rpc error: code = NotFound desc = could not find container \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": container with ID starting with 04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.359051 5114 scope.go:117] "RemoveContainer" containerID="7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.359341 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} err="failed to get container status \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": rpc error: code = NotFound desc = could not find container \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": container with ID starting with 7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.359370 5114 scope.go:117] "RemoveContainer" containerID="bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.359691 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} err="failed to get container status \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": rpc error: code = NotFound desc = could not find container \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": container 
with ID starting with bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.359719 5114 scope.go:117] "RemoveContainer" containerID="df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.360095 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} err="failed to get container status \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": rpc error: code = NotFound desc = could not find container \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": container with ID starting with df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.360120 5114 scope.go:117] "RemoveContainer" containerID="707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.360517 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} err="failed to get container status \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": rpc error: code = NotFound desc = could not find container \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": container with ID starting with 707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.360558 5114 scope.go:117] "RemoveContainer" containerID="578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.360897 5114 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} err="failed to get container status \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": rpc error: code = NotFound desc = could not find container \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": container with ID starting with 578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.360926 5114 scope.go:117] "RemoveContainer" containerID="48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.361288 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} err="failed to get container status \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": rpc error: code = NotFound desc = could not find container \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": container with ID starting with 48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.361313 5114 scope.go:117] "RemoveContainer" containerID="a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.361608 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} err="failed to get container status \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": rpc error: code = NotFound desc = could not find container \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": container with ID starting with a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8 not found: ID does not 
exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.361633 5114 scope.go:117] "RemoveContainer" containerID="81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.362773 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} err="failed to get container status \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": rpc error: code = NotFound desc = could not find container \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": container with ID starting with 81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.362797 5114 scope.go:117] "RemoveContainer" containerID="04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.364058 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} err="failed to get container status \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": rpc error: code = NotFound desc = could not find container \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": container with ID starting with 04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.364089 5114 scope.go:117] "RemoveContainer" containerID="7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.365840 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} err="failed to get container status 
\"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": rpc error: code = NotFound desc = could not find container \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": container with ID starting with 7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.365869 5114 scope.go:117] "RemoveContainer" containerID="bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.366201 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} err="failed to get container status \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": rpc error: code = NotFound desc = could not find container \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": container with ID starting with bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.366227 5114 scope.go:117] "RemoveContainer" containerID="df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.366736 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} err="failed to get container status \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": rpc error: code = NotFound desc = could not find container \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": container with ID starting with df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.366762 5114 scope.go:117] "RemoveContainer" 
containerID="707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.367056 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea"} err="failed to get container status \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": rpc error: code = NotFound desc = could not find container \"707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea\": container with ID starting with 707bd299659783e4d9f67413efa410d28c5331355be4bab4ed494932bdd945ea not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.367085 5114 scope.go:117] "RemoveContainer" containerID="578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.367388 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1"} err="failed to get container status \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": rpc error: code = NotFound desc = could not find container \"578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1\": container with ID starting with 578290cba618c0bdfc8bb97e4ba8846fc38602d1e3b472f7b80b183118044cc1 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.367418 5114 scope.go:117] "RemoveContainer" containerID="48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.367708 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c"} err="failed to get container status \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": rpc error: code = NotFound desc = could 
not find container \"48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c\": container with ID starting with 48be9dae4dfe678fe38edda4be323b0a90809dce681d99dc674f6da8790c844c not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.367733 5114 scope.go:117] "RemoveContainer" containerID="a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.367994 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8"} err="failed to get container status \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": rpc error: code = NotFound desc = could not find container \"a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8\": container with ID starting with a3f9e9802d3fdea1ec4adb9209f07fc5e05ec051fac6f4a2fc463a296ff9f4e8 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.368019 5114 scope.go:117] "RemoveContainer" containerID="81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.368301 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af"} err="failed to get container status \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": rpc error: code = NotFound desc = could not find container \"81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af\": container with ID starting with 81a2af445c890fbd679ec202d0790e7f9e1a5307cb9ef52f7210fbcff8f3f9af not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.368333 5114 scope.go:117] "RemoveContainer" containerID="04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 
00:19:01.368610 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b"} err="failed to get container status \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": rpc error: code = NotFound desc = could not find container \"04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b\": container with ID starting with 04135d799e64c2eabec33692612cfd88b78e247101f502285987a66d80fafd6b not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.368643 5114 scope.go:117] "RemoveContainer" containerID="7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.371774 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d"} err="failed to get container status \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": rpc error: code = NotFound desc = could not find container \"7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d\": container with ID starting with 7a624af939c1cc7288f73c787ce3cb815a32e66311003d3544e828e268b7c22d not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.371813 5114 scope.go:117] "RemoveContainer" containerID="bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.372150 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3"} err="failed to get container status \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": rpc error: code = NotFound desc = could not find container \"bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3\": container with ID starting with 
bab884b624a35b8a03b70080c74c46c2985e24fe5c5cc420eba12793d26b3db3 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.372179 5114 scope.go:117] "RemoveContainer" containerID="df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.372472 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063"} err="failed to get container status \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": rpc error: code = NotFound desc = could not find container \"df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063\": container with ID starting with df24b305ed8a5bf8fba93201f8a4740efe3897afc838b40b50f1fdb850143063 not found: ID does not exist" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.831804 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a832ec7-da6a-4e0b-8b74-47f2038c0c13" path="/var/lib/kubelet/pods/1a832ec7-da6a-4e0b-8b74-47f2038c0c13/volumes" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.835128 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b3c2120-6c92-4855-86fc-a08ba5b7f48c" path="/var/lib/kubelet/pods/6b3c2120-6c92-4855-86fc-a08ba5b7f48c/volumes" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.977363 5114 generic.go:358] "Generic (PLEG): container finished" podID="d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81" containerID="d21cd69980034797385ceadc9d4382c211a3bd04bb55155d46a6fd87ae8596e4" exitCode=0 Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.977490 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerDied","Data":"d21cd69980034797385ceadc9d4382c211a3bd04bb55155d46a6fd87ae8596e4"} Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 
00:19:01.977573 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"4b328bdf4108c103f46aa6e6d715b92e23e967d9d7cbd7437de0828c64aad8e7"} Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.980069 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" event={"ID":"a6400a1e-d6ac-4457-b471-c7c6347a8a8d","Type":"ContainerStarted","Data":"0f20f4e13c190446fba52d5b3ce08bd072928639b0861fc8b228a502344ee9a7"} Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.980099 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" event={"ID":"a6400a1e-d6ac-4457-b471-c7c6347a8a8d","Type":"ContainerStarted","Data":"a969400b91da5ae15b2dd9648e832403303de2816903934f692040265b195656"} Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.987505 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jlj6_c4627438-b1a6-4cc9-85f6-10e9dd97943b/kube-multus/0.log" Feb 16 00:19:01 crc kubenswrapper[5114]: I0216 00:19:01.987771 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-5jlj6" event={"ID":"c4627438-b1a6-4cc9-85f6-10e9dd97943b","Type":"ContainerStarted","Data":"02236dabb2509c08bc0258d708c772d234518a1205de70aafe9623a27c70c868"} Feb 16 00:19:02 crc kubenswrapper[5114]: I0216 00:19:02.078720 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-rqxkl" podStartSLOduration=2.078578322 podStartE2EDuration="2.078578322s" podCreationTimestamp="2026-02-16 00:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:19:02.074816546 +0000 UTC m=+618.456093394" 
watchObservedRunningTime="2026-02-16 00:19:02.078578322 +0000 UTC m=+618.459855150" Feb 16 00:19:03 crc kubenswrapper[5114]: I0216 00:19:03.007612 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"9803219ccae3b61828a2f4452aa004b39af672949995832a33365bdff9737e8a"} Feb 16 00:19:03 crc kubenswrapper[5114]: I0216 00:19:03.008270 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"107bf078402ab14360322372286d91a32feb2b4a988d75834e3ec4fdf1ff24dd"} Feb 16 00:19:03 crc kubenswrapper[5114]: I0216 00:19:03.008293 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"0223d34e6842d9424ecb3be0160f6d3244383a300fc5c9f8d3887b0bb9e002ec"} Feb 16 00:19:03 crc kubenswrapper[5114]: I0216 00:19:03.008311 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"dc2047912090b097b17eded595366c03c75020aaad4ab60f5a0a208cb465ac9c"} Feb 16 00:19:03 crc kubenswrapper[5114]: I0216 00:19:03.008325 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"d62034964ef51779a29df6b8796943b14866e726df3e6d00513b4dfa19e67193"} Feb 16 00:19:04 crc kubenswrapper[5114]: I0216 00:19:04.021091 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" 
event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"cbd2df19d645741895a00c16f077cadaa2764723af7377e7e706fb9eb20c91fa"} Feb 16 00:19:06 crc kubenswrapper[5114]: I0216 00:19:06.040322 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"ab1796d72d86b1311cfa205e46437ded616ae2a4b57736a32e2946b60b4e021b"} Feb 16 00:19:08 crc kubenswrapper[5114]: I0216 00:19:08.058705 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" event={"ID":"d30ff5c2-cac2-44a3-bdb0-85a68ee1bd81","Type":"ContainerStarted","Data":"f4d08999ee22f0f49297062881e7f7235c4fc0cc12cf665df0ed290f6c5cf8d8"} Feb 16 00:19:08 crc kubenswrapper[5114]: I0216 00:19:08.058970 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:08 crc kubenswrapper[5114]: I0216 00:19:08.058983 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:08 crc kubenswrapper[5114]: I0216 00:19:08.058992 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:08 crc kubenswrapper[5114]: I0216 00:19:08.086599 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:08 crc kubenswrapper[5114]: I0216 00:19:08.097890 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:08 crc kubenswrapper[5114]: I0216 00:19:08.100044 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" podStartSLOduration=8.10002338 podStartE2EDuration="8.10002338s" 
podCreationTimestamp="2026-02-16 00:19:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:19:08.095842553 +0000 UTC m=+624.477119381" watchObservedRunningTime="2026-02-16 00:19:08.10002338 +0000 UTC m=+624.481300198" Feb 16 00:19:20 crc kubenswrapper[5114]: I0216 00:19:20.085779 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:19:20 crc kubenswrapper[5114]: I0216 00:19:20.087306 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:19:40 crc kubenswrapper[5114]: I0216 00:19:40.110289 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ww2t5" Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.085372 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.086520 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.086601 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.087746 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e134b7537fe941db009f9833124e34b05d191a4535dab34b636141af6e8135c3"} pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.087847 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" containerID="cri-o://e134b7537fe941db009f9833124e34b05d191a4535dab34b636141af6e8135c3" gracePeriod=600 Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.432161 5114 generic.go:358] "Generic (PLEG): container finished" podID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerID="e134b7537fe941db009f9833124e34b05d191a4535dab34b636141af6e8135c3" exitCode=0 Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.432230 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerDied","Data":"e134b7537fe941db009f9833124e34b05d191a4535dab34b636141af6e8135c3"} Feb 16 00:19:50 crc kubenswrapper[5114]: I0216 00:19:50.432335 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"d1dfab39c6a9f63f318ef9f1041cbb88e1fb9256dbb5157a9f49af9886d305ad"} Feb 16 00:19:50 crc 
kubenswrapper[5114]: I0216 00:19:50.432425 5114 scope.go:117] "RemoveContainer" containerID="8a3ce095df471cd9bc6cb7b32e5ca37c749a18ef9c74e7e6da2f540e061ab35d" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.154611 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29520020-9tjzj"] Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.180203 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520020-9tjzj"] Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.180529 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.184398 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.184975 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.185332 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\"" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.314049 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdrg5\" (UniqueName: \"kubernetes.io/projected/073d01c7-0d60-496f-9be5-9c82140bf609-kube-api-access-wdrg5\") pod \"auto-csr-approver-29520020-9tjzj\" (UID: \"073d01c7-0d60-496f-9be5-9c82140bf609\") " pod="openshift-infra/auto-csr-approver-29520020-9tjzj" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.415898 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wdrg5\" (UniqueName: \"kubernetes.io/projected/073d01c7-0d60-496f-9be5-9c82140bf609-kube-api-access-wdrg5\") pod 
\"auto-csr-approver-29520020-9tjzj\" (UID: \"073d01c7-0d60-496f-9be5-9c82140bf609\") " pod="openshift-infra/auto-csr-approver-29520020-9tjzj" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.447448 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdrg5\" (UniqueName: \"kubernetes.io/projected/073d01c7-0d60-496f-9be5-9c82140bf609-kube-api-access-wdrg5\") pod \"auto-csr-approver-29520020-9tjzj\" (UID: \"073d01c7-0d60-496f-9be5-9c82140bf609\") " pod="openshift-infra/auto-csr-approver-29520020-9tjzj" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.511147 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" Feb 16 00:20:00 crc kubenswrapper[5114]: I0216 00:20:00.788584 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520020-9tjzj"] Feb 16 00:20:01 crc kubenswrapper[5114]: I0216 00:20:01.526692 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" event={"ID":"073d01c7-0d60-496f-9be5-9c82140bf609","Type":"ContainerStarted","Data":"d09e97b9bcb9ad401fa5184b783c9e94e118656af9c9dd39cdc62954a2c6e41e"} Feb 16 00:20:02 crc kubenswrapper[5114]: I0216 00:20:02.536572 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" event={"ID":"073d01c7-0d60-496f-9be5-9c82140bf609","Type":"ContainerStarted","Data":"838b6d8aad50dbc2ebe38dd81c8c9eb52e8a766058b35e986f471084c1cff7bf"} Feb 16 00:20:02 crc kubenswrapper[5114]: I0216 00:20:02.559441 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" podStartSLOduration=1.253367181 podStartE2EDuration="2.559412253s" podCreationTimestamp="2026-02-16 00:20:00 +0000 UTC" firstStartedPulling="2026-02-16 00:20:00.798565617 +0000 UTC m=+677.179842445" lastFinishedPulling="2026-02-16 
00:20:02.104610659 +0000 UTC m=+678.485887517" observedRunningTime="2026-02-16 00:20:02.553183178 +0000 UTC m=+678.934460056" watchObservedRunningTime="2026-02-16 00:20:02.559412253 +0000 UTC m=+678.940689131" Feb 16 00:20:03 crc kubenswrapper[5114]: I0216 00:20:03.547035 5114 generic.go:358] "Generic (PLEG): container finished" podID="073d01c7-0d60-496f-9be5-9c82140bf609" containerID="838b6d8aad50dbc2ebe38dd81c8c9eb52e8a766058b35e986f471084c1cff7bf" exitCode=0 Feb 16 00:20:03 crc kubenswrapper[5114]: I0216 00:20:03.547368 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" event={"ID":"073d01c7-0d60-496f-9be5-9c82140bf609","Type":"ContainerDied","Data":"838b6d8aad50dbc2ebe38dd81c8c9eb52e8a766058b35e986f471084c1cff7bf"} Feb 16 00:20:03 crc kubenswrapper[5114]: I0216 00:20:03.781485 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmc5k"] Feb 16 00:20:03 crc kubenswrapper[5114]: I0216 00:20:03.782576 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vmc5k" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="registry-server" containerID="cri-o://05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6" gracePeriod=30 Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.158556 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.279414 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-catalog-content\") pod \"b86a55a5-c20f-46a3-9dce-e756830b00dc\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.280062 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-utilities\") pod \"b86a55a5-c20f-46a3-9dce-e756830b00dc\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.280453 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcgtl\" (UniqueName: \"kubernetes.io/projected/b86a55a5-c20f-46a3-9dce-e756830b00dc-kube-api-access-kcgtl\") pod \"b86a55a5-c20f-46a3-9dce-e756830b00dc\" (UID: \"b86a55a5-c20f-46a3-9dce-e756830b00dc\") " Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.281388 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-utilities" (OuterVolumeSpecName: "utilities") pod "b86a55a5-c20f-46a3-9dce-e756830b00dc" (UID: "b86a55a5-c20f-46a3-9dce-e756830b00dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.293495 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b86a55a5-c20f-46a3-9dce-e756830b00dc-kube-api-access-kcgtl" (OuterVolumeSpecName: "kube-api-access-kcgtl") pod "b86a55a5-c20f-46a3-9dce-e756830b00dc" (UID: "b86a55a5-c20f-46a3-9dce-e756830b00dc"). InnerVolumeSpecName "kube-api-access-kcgtl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.307961 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b86a55a5-c20f-46a3-9dce-e756830b00dc" (UID: "b86a55a5-c20f-46a3-9dce-e756830b00dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.382650 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.383168 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b86a55a5-c20f-46a3-9dce-e756830b00dc-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.383292 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kcgtl\" (UniqueName: \"kubernetes.io/projected/b86a55a5-c20f-46a3-9dce-e756830b00dc-kube-api-access-kcgtl\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.560508 5114 generic.go:358] "Generic (PLEG): container finished" podID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerID="05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6" exitCode=0 Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.560586 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vmc5k" event={"ID":"b86a55a5-c20f-46a3-9dce-e756830b00dc","Type":"ContainerDied","Data":"05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6"} Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.560677 5114 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-vmc5k" event={"ID":"b86a55a5-c20f-46a3-9dce-e756830b00dc","Type":"ContainerDied","Data":"d09db1c78956ec1af7cab1b07d0f420046de4b42564ced76a4aae1e7b6488526"} Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.560743 5114 scope.go:117] "RemoveContainer" containerID="05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.560745 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vmc5k" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.603555 5114 scope.go:117] "RemoveContainer" containerID="42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.641882 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmc5k"] Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.651086 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vmc5k"] Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.668229 5114 scope.go:117] "RemoveContainer" containerID="d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.745949 5114 scope.go:117] "RemoveContainer" containerID="05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6" Feb 16 00:20:04 crc kubenswrapper[5114]: E0216 00:20:04.746874 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6\": container with ID starting with 05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6 not found: ID does not exist" containerID="05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.746946 5114 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6"} err="failed to get container status \"05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6\": rpc error: code = NotFound desc = could not find container \"05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6\": container with ID starting with 05fba95ec404a1168c5b7f2ebd76e74f82d18986b5dc48c602d7e8f1bcaf16e6 not found: ID does not exist" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.746981 5114 scope.go:117] "RemoveContainer" containerID="42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49" Feb 16 00:20:04 crc kubenswrapper[5114]: E0216 00:20:04.747617 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49\": container with ID starting with 42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49 not found: ID does not exist" containerID="42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.747680 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49"} err="failed to get container status \"42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49\": rpc error: code = NotFound desc = could not find container \"42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49\": container with ID starting with 42e67e8dfbc64bc14cf2f45c4fdbeeee3e1132e2b270c9c50b68d6fc84050c49 not found: ID does not exist" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.747719 5114 scope.go:117] "RemoveContainer" containerID="d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3" Feb 16 00:20:04 crc kubenswrapper[5114]: E0216 
00:20:04.748315 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3\": container with ID starting with d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3 not found: ID does not exist" containerID="d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.748389 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3"} err="failed to get container status \"d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3\": rpc error: code = NotFound desc = could not find container \"d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3\": container with ID starting with d3cea8247204abb4e8622b2ad30df93035704aae3061e44ee5601acedfb28cb3 not found: ID does not exist" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.882113 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" Feb 16 00:20:04 crc kubenswrapper[5114]: I0216 00:20:04.994716 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdrg5\" (UniqueName: \"kubernetes.io/projected/073d01c7-0d60-496f-9be5-9c82140bf609-kube-api-access-wdrg5\") pod \"073d01c7-0d60-496f-9be5-9c82140bf609\" (UID: \"073d01c7-0d60-496f-9be5-9c82140bf609\") " Feb 16 00:20:05 crc kubenswrapper[5114]: I0216 00:20:05.002608 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/073d01c7-0d60-496f-9be5-9c82140bf609-kube-api-access-wdrg5" (OuterVolumeSpecName: "kube-api-access-wdrg5") pod "073d01c7-0d60-496f-9be5-9c82140bf609" (UID: "073d01c7-0d60-496f-9be5-9c82140bf609"). InnerVolumeSpecName "kube-api-access-wdrg5". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:20:05 crc kubenswrapper[5114]: I0216 00:20:05.096808 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wdrg5\" (UniqueName: \"kubernetes.io/projected/073d01c7-0d60-496f-9be5-9c82140bf609-kube-api-access-wdrg5\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:05 crc kubenswrapper[5114]: I0216 00:20:05.569827 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" event={"ID":"073d01c7-0d60-496f-9be5-9c82140bf609","Type":"ContainerDied","Data":"d09e97b9bcb9ad401fa5184b783c9e94e118656af9c9dd39cdc62954a2c6e41e"} Feb 16 00:20:05 crc kubenswrapper[5114]: I0216 00:20:05.569889 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d09e97b9bcb9ad401fa5184b783c9e94e118656af9c9dd39cdc62954a2c6e41e" Feb 16 00:20:05 crc kubenswrapper[5114]: I0216 00:20:05.569970 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520020-9tjzj" Feb 16 00:20:05 crc kubenswrapper[5114]: I0216 00:20:05.827416 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" path="/var/lib/kubelet/pods/b86a55a5-c20f-46a3-9dce-e756830b00dc/volumes" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.465740 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz"] Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467136 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="073d01c7-0d60-496f-9be5-9c82140bf609" containerName="oc" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467156 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="073d01c7-0d60-496f-9be5-9c82140bf609" containerName="oc" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467183 5114 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="registry-server" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467191 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="registry-server" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467204 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="extract-content" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467212 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="extract-content" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467263 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="extract-utilities" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467271 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="extract-utilities" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467391 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="073d01c7-0d60-496f-9be5-9c82140bf609" containerName="oc" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.467409 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="b86a55a5-c20f-46a3-9dce-e756830b00dc" containerName="registry-server" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.472146 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.477520 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.479535 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz"] Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.636415 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.636497 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.636537 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n827c\" (UniqueName: \"kubernetes.io/projected/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-kube-api-access-n827c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc 
kubenswrapper[5114]: I0216 00:20:07.738851 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.738912 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.738953 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n827c\" (UniqueName: \"kubernetes.io/projected/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-kube-api-access-n827c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.740130 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.740365 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.761812 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n827c\" (UniqueName: \"kubernetes.io/projected/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-kube-api-access-n827c\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:07 crc kubenswrapper[5114]: I0216 00:20:07.798858 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:08 crc kubenswrapper[5114]: I0216 00:20:08.032432 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz"] Feb 16 00:20:08 crc kubenswrapper[5114]: I0216 00:20:08.593430 5114 generic.go:358] "Generic (PLEG): container finished" podID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerID="1b76fc8531f5e854cd8b2ef36968a8d335d53f457aa9903108d40860d6b873b3" exitCode=0 Feb 16 00:20:08 crc kubenswrapper[5114]: I0216 00:20:08.593509 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" event={"ID":"30a1abbb-4ff1-412e-967c-bfdbe8a5468f","Type":"ContainerDied","Data":"1b76fc8531f5e854cd8b2ef36968a8d335d53f457aa9903108d40860d6b873b3"} Feb 16 00:20:08 crc kubenswrapper[5114]: I0216 00:20:08.594079 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" event={"ID":"30a1abbb-4ff1-412e-967c-bfdbe8a5468f","Type":"ContainerStarted","Data":"383f765563f044708b4726524e44cd0311b47e77814af31b18ba171c6a0b9ee4"} Feb 16 00:20:09 crc kubenswrapper[5114]: I0216 00:20:09.606052 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" event={"ID":"30a1abbb-4ff1-412e-967c-bfdbe8a5468f","Type":"ContainerStarted","Data":"ccdeb0ad309b8f6d099d14b626a435a1b4f4cb9dca20ffb34bb665ba5503f8ae"} Feb 16 00:20:10 crc kubenswrapper[5114]: I0216 00:20:10.618064 5114 generic.go:358] "Generic (PLEG): container finished" podID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerID="ccdeb0ad309b8f6d099d14b626a435a1b4f4cb9dca20ffb34bb665ba5503f8ae" exitCode=0 Feb 16 00:20:10 crc kubenswrapper[5114]: I0216 00:20:10.618226 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" event={"ID":"30a1abbb-4ff1-412e-967c-bfdbe8a5468f","Type":"ContainerDied","Data":"ccdeb0ad309b8f6d099d14b626a435a1b4f4cb9dca20ffb34bb665ba5503f8ae"} Feb 16 00:20:11 crc kubenswrapper[5114]: I0216 00:20:11.627844 5114 generic.go:358] "Generic (PLEG): container finished" podID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerID="b6a12720fdd063566d7dc63c966ec80ee5b38df6434e9ec82b977ee4f9395937" exitCode=0 Feb 16 00:20:11 crc kubenswrapper[5114]: I0216 00:20:11.627903 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" event={"ID":"30a1abbb-4ff1-412e-967c-bfdbe8a5468f","Type":"ContainerDied","Data":"b6a12720fdd063566d7dc63c966ec80ee5b38df6434e9ec82b977ee4f9395937"} Feb 16 00:20:12 crc kubenswrapper[5114]: I0216 00:20:12.900815 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.028192 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-bundle\") pod \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.028308 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n827c\" (UniqueName: \"kubernetes.io/projected/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-kube-api-access-n827c\") pod \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.028500 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-util\") pod \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\" (UID: \"30a1abbb-4ff1-412e-967c-bfdbe8a5468f\") " Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.031653 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-bundle" (OuterVolumeSpecName: "bundle") pod "30a1abbb-4ff1-412e-967c-bfdbe8a5468f" (UID: "30a1abbb-4ff1-412e-967c-bfdbe8a5468f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.038425 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-kube-api-access-n827c" (OuterVolumeSpecName: "kube-api-access-n827c") pod "30a1abbb-4ff1-412e-967c-bfdbe8a5468f" (UID: "30a1abbb-4ff1-412e-967c-bfdbe8a5468f"). InnerVolumeSpecName "kube-api-access-n827c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.047978 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-util" (OuterVolumeSpecName: "util") pod "30a1abbb-4ff1-412e-967c-bfdbe8a5468f" (UID: "30a1abbb-4ff1-412e-967c-bfdbe8a5468f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.130358 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-util\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.130418 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.130432 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n827c\" (UniqueName: \"kubernetes.io/projected/30a1abbb-4ff1-412e-967c-bfdbe8a5468f-kube-api-access-n827c\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.646355 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" event={"ID":"30a1abbb-4ff1-412e-967c-bfdbe8a5468f","Type":"ContainerDied","Data":"383f765563f044708b4726524e44cd0311b47e77814af31b18ba171c6a0b9ee4"} Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.646383 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.646408 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="383f765563f044708b4726524e44cd0311b47e77814af31b18ba171c6a0b9ee4" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.866554 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf"] Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.868019 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerName="pull" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.868049 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerName="pull" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.868074 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerName="extract" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.868084 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerName="extract" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.868107 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerName="util" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.868115 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerName="util" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.868287 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="30a1abbb-4ff1-412e-967c-bfdbe8a5468f" containerName="extract" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.878476 5114 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.879897 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf"] Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.881773 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.939378 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.939454 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:13 crc kubenswrapper[5114]: I0216 00:20:13.939509 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddhfb\" (UniqueName: \"kubernetes.io/projected/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-kube-api-access-ddhfb\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " 
pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.040297 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ddhfb\" (UniqueName: \"kubernetes.io/projected/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-kube-api-access-ddhfb\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.040405 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.040452 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.041164 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-bundle\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.041281 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-util\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.110142 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddhfb\" (UniqueName: \"kubernetes.io/projected/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-kube-api-access-ddhfb\") pod \"00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.197133 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.463562 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf"] Feb 16 00:20:14 crc kubenswrapper[5114]: W0216 00:20:14.481967 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99cc350b_a6cc_4472_afcb_96cba5c0cf4a.slice/crio-88b96629360cd475f251d8f839fe6099560c1adcf2bf86ca6c00080abbb2dfcd WatchSource:0}: Error finding container 88b96629360cd475f251d8f839fe6099560c1adcf2bf86ca6c00080abbb2dfcd: Status 404 returned error can't find the container with id 88b96629360cd475f251d8f839fe6099560c1adcf2bf86ca6c00080abbb2dfcd Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.656672 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" event={"ID":"99cc350b-a6cc-4472-afcb-96cba5c0cf4a","Type":"ContainerStarted","Data":"eaa507bab64ac915b57a4dabd68923669e350f0c8313ecb499c1955fcef25e17"} Feb 16 00:20:14 crc kubenswrapper[5114]: I0216 00:20:14.656755 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" event={"ID":"99cc350b-a6cc-4472-afcb-96cba5c0cf4a","Type":"ContainerStarted","Data":"88b96629360cd475f251d8f839fe6099560c1adcf2bf86ca6c00080abbb2dfcd"} Feb 16 00:20:15 crc kubenswrapper[5114]: I0216 00:20:15.666677 5114 generic.go:358] "Generic (PLEG): container finished" podID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerID="eaa507bab64ac915b57a4dabd68923669e350f0c8313ecb499c1955fcef25e17" exitCode=0 Feb 16 00:20:15 crc kubenswrapper[5114]: I0216 00:20:15.666775 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" event={"ID":"99cc350b-a6cc-4472-afcb-96cba5c0cf4a","Type":"ContainerDied","Data":"eaa507bab64ac915b57a4dabd68923669e350f0c8313ecb499c1955fcef25e17"} Feb 16 00:20:16 crc kubenswrapper[5114]: I0216 00:20:16.690178 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" event={"ID":"99cc350b-a6cc-4472-afcb-96cba5c0cf4a","Type":"ContainerStarted","Data":"a5c29699bdad48462bb547c462c74720b2bb116a6ede1106cb8ff2344063c85d"} Feb 16 00:20:17 crc kubenswrapper[5114]: I0216 00:20:17.699668 5114 generic.go:358] "Generic (PLEG): container finished" podID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerID="a5c29699bdad48462bb547c462c74720b2bb116a6ede1106cb8ff2344063c85d" exitCode=0 Feb 16 00:20:17 crc kubenswrapper[5114]: I0216 00:20:17.700162 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" event={"ID":"99cc350b-a6cc-4472-afcb-96cba5c0cf4a","Type":"ContainerDied","Data":"a5c29699bdad48462bb547c462c74720b2bb116a6ede1106cb8ff2344063c85d"} Feb 16 00:20:18 crc kubenswrapper[5114]: I0216 00:20:18.710880 5114 generic.go:358] "Generic (PLEG): container finished" podID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerID="31c265937c5839edc262bf18a16ec5bd952f5c3398a115c1b84d7a3ff43d7137" exitCode=0 Feb 16 00:20:18 crc kubenswrapper[5114]: I0216 00:20:18.710989 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" event={"ID":"99cc350b-a6cc-4472-afcb-96cba5c0cf4a","Type":"ContainerDied","Data":"31c265937c5839edc262bf18a16ec5bd952f5c3398a115c1b84d7a3ff43d7137"} Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.381718 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p"] Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.387784 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.413700 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p"] Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.436277 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hs7w\" (UniqueName: \"kubernetes.io/projected/02025ac3-beca-451a-8036-70876e1f2439-kube-api-access-5hs7w\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.436333 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.436423 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.538575 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.538738 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5hs7w\" (UniqueName: \"kubernetes.io/projected/02025ac3-beca-451a-8036-70876e1f2439-kube-api-access-5hs7w\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.539556 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.539570 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.540285 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: 
\"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.566856 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hs7w\" (UniqueName: \"kubernetes.io/projected/02025ac3-beca-451a-8036-70876e1f2439-kube-api-access-5hs7w\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:19 crc kubenswrapper[5114]: I0216 00:20:19.702453 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.154882 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.254875 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddhfb\" (UniqueName: \"kubernetes.io/projected/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-kube-api-access-ddhfb\") pod \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.254956 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-bundle\") pod \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.255094 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-util\") pod \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\" (UID: \"99cc350b-a6cc-4472-afcb-96cba5c0cf4a\") " Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.256423 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-bundle" (OuterVolumeSpecName: "bundle") pod "99cc350b-a6cc-4472-afcb-96cba5c0cf4a" (UID: "99cc350b-a6cc-4472-afcb-96cba5c0cf4a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.272775 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-kube-api-access-ddhfb" (OuterVolumeSpecName: "kube-api-access-ddhfb") pod "99cc350b-a6cc-4472-afcb-96cba5c0cf4a" (UID: "99cc350b-a6cc-4472-afcb-96cba5c0cf4a"). InnerVolumeSpecName "kube-api-access-ddhfb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.357123 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddhfb\" (UniqueName: \"kubernetes.io/projected/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-kube-api-access-ddhfb\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.357179 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.395447 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-util" (OuterVolumeSpecName: "util") pod "99cc350b-a6cc-4472-afcb-96cba5c0cf4a" (UID: "99cc350b-a6cc-4472-afcb-96cba5c0cf4a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.459450 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/99cc350b-a6cc-4472-afcb-96cba5c0cf4a-util\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.461407 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p"] Feb 16 00:20:20 crc kubenswrapper[5114]: W0216 00:20:20.463733 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02025ac3_beca_451a_8036_70876e1f2439.slice/crio-3c4086062ede8cf6149fcd99bbbd627930808678c99b4ce7ef82667f52063675 WatchSource:0}: Error finding container 3c4086062ede8cf6149fcd99bbbd627930808678c99b4ce7ef82667f52063675: Status 404 returned error can't find the container with id 3c4086062ede8cf6149fcd99bbbd627930808678c99b4ce7ef82667f52063675 Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.733530 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" event={"ID":"02025ac3-beca-451a-8036-70876e1f2439","Type":"ContainerStarted","Data":"b3b3a48f831b0c8077a3ce33eb9eaa86c65171c29f58b043eda581aaad852ef7"} Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.734091 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" event={"ID":"02025ac3-beca-451a-8036-70876e1f2439","Type":"ContainerStarted","Data":"3c4086062ede8cf6149fcd99bbbd627930808678c99b4ce7ef82667f52063675"} Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.737478 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" 
event={"ID":"99cc350b-a6cc-4472-afcb-96cba5c0cf4a","Type":"ContainerDied","Data":"88b96629360cd475f251d8f839fe6099560c1adcf2bf86ca6c00080abbb2dfcd"} Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.737517 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88b96629360cd475f251d8f839fe6099560c1adcf2bf86ca6c00080abbb2dfcd" Feb 16 00:20:20 crc kubenswrapper[5114]: I0216 00:20:20.737589 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf" Feb 16 00:20:21 crc kubenswrapper[5114]: I0216 00:20:21.747352 5114 generic.go:358] "Generic (PLEG): container finished" podID="02025ac3-beca-451a-8036-70876e1f2439" containerID="b3b3a48f831b0c8077a3ce33eb9eaa86c65171c29f58b043eda581aaad852ef7" exitCode=0 Feb 16 00:20:21 crc kubenswrapper[5114]: I0216 00:20:21.747483 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" event={"ID":"02025ac3-beca-451a-8036-70876e1f2439","Type":"ContainerDied","Data":"b3b3a48f831b0c8077a3ce33eb9eaa86c65171c29f58b043eda581aaad852ef7"} Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.221182 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.222434 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerName="extract" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.222450 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerName="extract" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.222467 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" 
containerName="pull" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.222473 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerName="pull" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.222502 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerName="util" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.222509 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerName="util" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.222622 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="99cc350b-a6cc-4472-afcb-96cba5c0cf4a" containerName="extract" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.261965 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.262209 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.267162 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.267594 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-xgxds\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.267737 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.338851 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nktmw\" (UniqueName: \"kubernetes.io/projected/13bfd2d1-3c0a-4fc6-a84b-45f3459195b0-kube-api-access-nktmw\") pod \"obo-prometheus-operator-9bc85b4bf-phf92\" (UID: \"13bfd2d1-3c0a-4fc6-a84b-45f3459195b0\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.359819 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.369130 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.382185 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-8h4vj\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.414147 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.418049 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.429865 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.430156 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.444783 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c3c704d-2d95-41a5-9189-83392c97240e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-lrffd\" (UID: \"3c3c704d-2d95-41a5-9189-83392c97240e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.444875 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nktmw\" (UniqueName: \"kubernetes.io/projected/13bfd2d1-3c0a-4fc6-a84b-45f3459195b0-kube-api-access-nktmw\") pod \"obo-prometheus-operator-9bc85b4bf-phf92\" (UID: \"13bfd2d1-3c0a-4fc6-a84b-45f3459195b0\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.444928 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c3c704d-2d95-41a5-9189-83392c97240e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-lrffd\" (UID: \"3c3c704d-2d95-41a5-9189-83392c97240e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.477343 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.530542 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nktmw\" (UniqueName: \"kubernetes.io/projected/13bfd2d1-3c0a-4fc6-a84b-45f3459195b0-kube-api-access-nktmw\") pod 
\"obo-prometheus-operator-9bc85b4bf-phf92\" (UID: \"13bfd2d1-3c0a-4fc6-a84b-45f3459195b0\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.556683 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c3c704d-2d95-41a5-9189-83392c97240e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-lrffd\" (UID: \"3c3c704d-2d95-41a5-9189-83392c97240e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.557073 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb532db4-78a1-465f-8c41-ba9de05d7349-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-gsmjz\" (UID: \"eb532db4-78a1-465f-8c41-ba9de05d7349\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.557098 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb532db4-78a1-465f-8c41-ba9de05d7349-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-gsmjz\" (UID: \"eb532db4-78a1-465f-8c41-ba9de05d7349\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.557135 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c3c704d-2d95-41a5-9189-83392c97240e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-lrffd\" (UID: \"3c3c704d-2d95-41a5-9189-83392c97240e\") " 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.582743 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.583206 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3c3c704d-2d95-41a5-9189-83392c97240e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-lrffd\" (UID: \"3c3c704d-2d95-41a5-9189-83392c97240e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.583323 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3c3c704d-2d95-41a5-9189-83392c97240e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-lrffd\" (UID: \"3c3c704d-2d95-41a5-9189-83392c97240e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.658036 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb532db4-78a1-465f-8c41-ba9de05d7349-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-gsmjz\" (UID: \"eb532db4-78a1-465f-8c41-ba9de05d7349\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.658359 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb532db4-78a1-465f-8c41-ba9de05d7349-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-gsmjz\" (UID: 
\"eb532db4-78a1-465f-8c41-ba9de05d7349\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.686019 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb532db4-78a1-465f-8c41-ba9de05d7349-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-gsmjz\" (UID: \"eb532db4-78a1-465f-8c41-ba9de05d7349\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.688595 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.699859 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/eb532db4-78a1-465f-8c41-ba9de05d7349-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6db558659d-gsmjz\" (UID: \"eb532db4-78a1-465f-8c41-ba9de05d7349\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.743779 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-c8p8q"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.759205 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.763678 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.763941 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-bcq7d\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.770344 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.797211 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-c8p8q"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.863163 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lhvd\" (UniqueName: \"kubernetes.io/projected/76206e1f-dcb7-4b06-9980-7bfb8c3c9b02-kube-api-access-9lhvd\") pod \"observability-operator-85c68dddb-c8p8q\" (UID: \"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02\") " pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.863351 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/76206e1f-dcb7-4b06-9980-7bfb8c3c9b02-observability-operator-tls\") pod \"observability-operator-85c68dddb-c8p8q\" (UID: \"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02\") " pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.943370 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-hqf4v"] Feb 16 
00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.955345 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.957931 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-hqf4v"] Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.963663 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-khd8z\"" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.964532 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lhvd\" (UniqueName: \"kubernetes.io/projected/76206e1f-dcb7-4b06-9980-7bfb8c3c9b02-kube-api-access-9lhvd\") pod \"observability-operator-85c68dddb-c8p8q\" (UID: \"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02\") " pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.964677 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/76206e1f-dcb7-4b06-9980-7bfb8c3c9b02-observability-operator-tls\") pod \"observability-operator-85c68dddb-c8p8q\" (UID: \"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02\") " pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.969344 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/76206e1f-dcb7-4b06-9980-7bfb8c3c9b02-observability-operator-tls\") pod \"observability-operator-85c68dddb-c8p8q\" (UID: \"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02\") " pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:25 crc kubenswrapper[5114]: I0216 00:20:25.995547 5114 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9lhvd\" (UniqueName: \"kubernetes.io/projected/76206e1f-dcb7-4b06-9980-7bfb8c3c9b02-kube-api-access-9lhvd\") pod \"observability-operator-85c68dddb-c8p8q\" (UID: \"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02\") " pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.066350 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825rr\" (UniqueName: \"kubernetes.io/projected/24f609a1-7bb0-432e-951d-c23dc581bc81-kube-api-access-825rr\") pod \"perses-operator-669c9f96b5-hqf4v\" (UID: \"24f609a1-7bb0-432e-951d-c23dc581bc81\") " pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.066550 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/24f609a1-7bb0-432e-951d-c23dc581bc81-openshift-service-ca\") pod \"perses-operator-669c9f96b5-hqf4v\" (UID: \"24f609a1-7bb0-432e-951d-c23dc581bc81\") " pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.084975 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.168684 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-825rr\" (UniqueName: \"kubernetes.io/projected/24f609a1-7bb0-432e-951d-c23dc581bc81-kube-api-access-825rr\") pod \"perses-operator-669c9f96b5-hqf4v\" (UID: \"24f609a1-7bb0-432e-951d-c23dc581bc81\") " pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.168891 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/24f609a1-7bb0-432e-951d-c23dc581bc81-openshift-service-ca\") pod \"perses-operator-669c9f96b5-hqf4v\" (UID: \"24f609a1-7bb0-432e-951d-c23dc581bc81\") " pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.170026 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/24f609a1-7bb0-432e-951d-c23dc581bc81-openshift-service-ca\") pod \"perses-operator-669c9f96b5-hqf4v\" (UID: \"24f609a1-7bb0-432e-951d-c23dc581bc81\") " pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.192696 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-825rr\" (UniqueName: \"kubernetes.io/projected/24f609a1-7bb0-432e-951d-c23dc581bc81-kube-api-access-825rr\") pod \"perses-operator-669c9f96b5-hqf4v\" (UID: \"24f609a1-7bb0-432e-951d-c23dc581bc81\") " pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:26 crc kubenswrapper[5114]: I0216 00:20:26.289735 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.141080 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd"] Feb 16 00:20:27 crc kubenswrapper[5114]: W0216 00:20:27.148976 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3c3c704d_2d95_41a5_9189_83392c97240e.slice/crio-f9b81a0e37f97f90760a85afbe4a7918a473417fa41a0fb7cb0518322e549701 WatchSource:0}: Error finding container f9b81a0e37f97f90760a85afbe4a7918a473417fa41a0fb7cb0518322e549701: Status 404 returned error can't find the container with id f9b81a0e37f97f90760a85afbe4a7918a473417fa41a0fb7cb0518322e549701 Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.215139 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-c8p8q"] Feb 16 00:20:27 crc kubenswrapper[5114]: W0216 00:20:27.458040 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24f609a1_7bb0_432e_951d_c23dc581bc81.slice/crio-cd734e5bea28474528f3e6688b9e72ea8a0b814e6c35efb534facfe933259ebe WatchSource:0}: Error finding container cd734e5bea28474528f3e6688b9e72ea8a0b814e6c35efb534facfe933259ebe: Status 404 returned error can't find the container with id cd734e5bea28474528f3e6688b9e72ea8a0b814e6c35efb534facfe933259ebe Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.460347 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-hqf4v"] Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.495471 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz"] Feb 16 00:20:27 crc kubenswrapper[5114]: W0216 00:20:27.506382 5114 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13bfd2d1_3c0a_4fc6_a84b_45f3459195b0.slice/crio-1587bdf66536ff9dc91cb82cd3148fc7ef32af0809b8241cd01d0af09ef2c3af WatchSource:0}: Error finding container 1587bdf66536ff9dc91cb82cd3148fc7ef32af0809b8241cd01d0af09ef2c3af: Status 404 returned error can't find the container with id 1587bdf66536ff9dc91cb82cd3148fc7ef32af0809b8241cd01d0af09ef2c3af Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.514056 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92"] Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.812462 5114 generic.go:358] "Generic (PLEG): container finished" podID="02025ac3-beca-451a-8036-70876e1f2439" containerID="7784ee2722439dc2c4e83c63e78c625ff318296db944c302ba3cf59f32654bbd" exitCode=0 Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.812904 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" event={"ID":"02025ac3-beca-451a-8036-70876e1f2439","Type":"ContainerDied","Data":"7784ee2722439dc2c4e83c63e78c625ff318296db944c302ba3cf59f32654bbd"} Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.815932 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" event={"ID":"13bfd2d1-3c0a-4fc6-a84b-45f3459195b0","Type":"ContainerStarted","Data":"1587bdf66536ff9dc91cb82cd3148fc7ef32af0809b8241cd01d0af09ef2c3af"} Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.824729 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" event={"ID":"eb532db4-78a1-465f-8c41-ba9de05d7349","Type":"ContainerStarted","Data":"8cb505a5d0a0bde212de68f74bfe547068aa28c0a5e1045d06190f4b55065a5d"} Feb 16 00:20:27 crc 
kubenswrapper[5114]: I0216 00:20:27.824774 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" event={"ID":"3c3c704d-2d95-41a5-9189-83392c97240e","Type":"ContainerStarted","Data":"f9b81a0e37f97f90760a85afbe4a7918a473417fa41a0fb7cb0518322e549701"} Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.824788 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-c8p8q" event={"ID":"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02","Type":"ContainerStarted","Data":"205f4895ce6a2d3e39f3d4d0f6475a50706b417902a39e0a0118d22d52406bfb"} Feb 16 00:20:27 crc kubenswrapper[5114]: I0216 00:20:27.824993 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" event={"ID":"24f609a1-7bb0-432e-951d-c23dc581bc81","Type":"ContainerStarted","Data":"cd734e5bea28474528f3e6688b9e72ea8a0b814e6c35efb534facfe933259ebe"} Feb 16 00:20:28 crc kubenswrapper[5114]: I0216 00:20:28.856185 5114 generic.go:358] "Generic (PLEG): container finished" podID="02025ac3-beca-451a-8036-70876e1f2439" containerID="80cab09183af3223bd045486abf76d6c1c2c443bfd4b2c999814b44ceef64b46" exitCode=0 Feb 16 00:20:28 crc kubenswrapper[5114]: I0216 00:20:28.857599 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" event={"ID":"02025ac3-beca-451a-8036-70876e1f2439","Type":"ContainerDied","Data":"80cab09183af3223bd045486abf76d6c1c2c443bfd4b2c999814b44ceef64b46"} Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.011414 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6d7489bfd6-856tc"] Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.016890 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.018035 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6d7489bfd6-856tc"] Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.020130 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.020869 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.022977 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-sh29r\"" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.023928 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.144185 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e827cfb-c6eb-4961-9016-ffe16f28f66c-webhook-cert\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.144386 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e827cfb-c6eb-4961-9016-ffe16f28f66c-apiservice-cert\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.144445 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l569q\" (UniqueName: \"kubernetes.io/projected/4e827cfb-c6eb-4961-9016-ffe16f28f66c-kube-api-access-l569q\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.250347 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e827cfb-c6eb-4961-9016-ffe16f28f66c-apiservice-cert\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.250414 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l569q\" (UniqueName: \"kubernetes.io/projected/4e827cfb-c6eb-4961-9016-ffe16f28f66c-kube-api-access-l569q\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.250506 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e827cfb-c6eb-4961-9016-ffe16f28f66c-webhook-cert\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.257976 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4e827cfb-c6eb-4961-9016-ffe16f28f66c-webhook-cert\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc 
kubenswrapper[5114]: I0216 00:20:29.258072 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4e827cfb-c6eb-4961-9016-ffe16f28f66c-apiservice-cert\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.326630 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l569q\" (UniqueName: \"kubernetes.io/projected/4e827cfb-c6eb-4961-9016-ffe16f28f66c-kube-api-access-l569q\") pod \"elastic-operator-6d7489bfd6-856tc\" (UID: \"4e827cfb-c6eb-4961-9016-ffe16f28f66c\") " pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:29 crc kubenswrapper[5114]: I0216 00:20:29.371444 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.216454 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6d7489bfd6-856tc"] Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.382229 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.477017 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hs7w\" (UniqueName: \"kubernetes.io/projected/02025ac3-beca-451a-8036-70876e1f2439-kube-api-access-5hs7w\") pod \"02025ac3-beca-451a-8036-70876e1f2439\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.477278 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-util\") pod \"02025ac3-beca-451a-8036-70876e1f2439\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.477343 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-bundle\") pod \"02025ac3-beca-451a-8036-70876e1f2439\" (UID: \"02025ac3-beca-451a-8036-70876e1f2439\") " Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.478845 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-bundle" (OuterVolumeSpecName: "bundle") pod "02025ac3-beca-451a-8036-70876e1f2439" (UID: "02025ac3-beca-451a-8036-70876e1f2439"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.479072 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.491966 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-util" (OuterVolumeSpecName: "util") pod "02025ac3-beca-451a-8036-70876e1f2439" (UID: "02025ac3-beca-451a-8036-70876e1f2439"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.504539 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02025ac3-beca-451a-8036-70876e1f2439-kube-api-access-5hs7w" (OuterVolumeSpecName: "kube-api-access-5hs7w") pod "02025ac3-beca-451a-8036-70876e1f2439" (UID: "02025ac3-beca-451a-8036-70876e1f2439"). InnerVolumeSpecName "kube-api-access-5hs7w". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.581627 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/02025ac3-beca-451a-8036-70876e1f2439-util\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.581740 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5hs7w\" (UniqueName: \"kubernetes.io/projected/02025ac3-beca-451a-8036-70876e1f2439-kube-api-access-5hs7w\") on node \"crc\" DevicePath \"\"" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.913474 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" event={"ID":"4e827cfb-c6eb-4961-9016-ffe16f28f66c","Type":"ContainerStarted","Data":"47246ba1f4d6649b47511cc85847b92c4e5b17055ad5a1d358baf4f803a2239b"} Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.917858 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" event={"ID":"02025ac3-beca-451a-8036-70876e1f2439","Type":"ContainerDied","Data":"3c4086062ede8cf6149fcd99bbbd627930808678c99b4ce7ef82667f52063675"} Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.917900 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4086062ede8cf6149fcd99bbbd627930808678c99b4ce7ef82667f52063675" Feb 16 00:20:30 crc kubenswrapper[5114]: I0216 00:20:30.918076 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.038510 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" event={"ID":"eb532db4-78a1-465f-8c41-ba9de05d7349","Type":"ContainerStarted","Data":"ceaad5c849a5c9b7b29365cef8c32c705eb17bec6281b37363a6cd294d98a86f"} Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.041976 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" event={"ID":"3c3c704d-2d95-41a5-9189-83392c97240e","Type":"ContainerStarted","Data":"17e000fdd4c25c02e8a65155748f13e77c610c97eedaa160357f5d842bdd2491"} Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.044594 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-c8p8q" event={"ID":"76206e1f-dcb7-4b06-9980-7bfb8c3c9b02","Type":"ContainerStarted","Data":"384c473d18c8238d2d7d98cee607a099ecc58fc42ac01c324a5fe2ab7732f490"} Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.044807 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.046907 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" event={"ID":"4e827cfb-c6eb-4961-9016-ffe16f28f66c","Type":"ContainerStarted","Data":"ffea7307db2f4df347e9ac1c82d8bf202fb7c05ab3b5815235279294c3d4fccb"} Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.048971 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" 
event={"ID":"24f609a1-7bb0-432e-951d-c23dc581bc81","Type":"ContainerStarted","Data":"bd38f55721df0ac3272b851424ef447be6a28744db19e77dae8f538b5f9dd96a"} Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.049167 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.051185 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-c8p8q" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.051746 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" event={"ID":"13bfd2d1-3c0a-4fc6-a84b-45f3459195b0","Type":"ContainerStarted","Data":"b6749c1ccd13dec89e1d70537132b4c26442001f6b9662158261104e482ceff3"} Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.067206 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-gsmjz" podStartSLOduration=3.114243858 podStartE2EDuration="17.067171909s" podCreationTimestamp="2026-02-16 00:20:25 +0000 UTC" firstStartedPulling="2026-02-16 00:20:27.496811158 +0000 UTC m=+703.878087966" lastFinishedPulling="2026-02-16 00:20:41.449739199 +0000 UTC m=+717.831016017" observedRunningTime="2026-02-16 00:20:42.062489187 +0000 UTC m=+718.443766005" watchObservedRunningTime="2026-02-16 00:20:42.067171909 +0000 UTC m=+718.448448727" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.092153 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-c8p8q" podStartSLOduration=2.831851222 podStartE2EDuration="17.09211785s" podCreationTimestamp="2026-02-16 00:20:25 +0000 UTC" firstStartedPulling="2026-02-16 00:20:27.224936419 +0000 UTC m=+703.606213237" lastFinishedPulling="2026-02-16 
00:20:41.485203047 +0000 UTC m=+717.866479865" observedRunningTime="2026-02-16 00:20:42.086568444 +0000 UTC m=+718.467845272" watchObservedRunningTime="2026-02-16 00:20:42.09211785 +0000 UTC m=+718.473394668" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.123978 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6d7489bfd6-856tc" podStartSLOduration=2.9802953739999998 podStartE2EDuration="14.123948416s" podCreationTimestamp="2026-02-16 00:20:28 +0000 UTC" firstStartedPulling="2026-02-16 00:20:30.307390834 +0000 UTC m=+706.688667652" lastFinishedPulling="2026-02-16 00:20:41.451043876 +0000 UTC m=+717.832320694" observedRunningTime="2026-02-16 00:20:42.123763021 +0000 UTC m=+718.505039859" watchObservedRunningTime="2026-02-16 00:20:42.123948416 +0000 UTC m=+718.505225234" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.156551 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" podStartSLOduration=3.167170836 podStartE2EDuration="17.156526622s" podCreationTimestamp="2026-02-16 00:20:25 +0000 UTC" firstStartedPulling="2026-02-16 00:20:27.461812893 +0000 UTC m=+703.843089711" lastFinishedPulling="2026-02-16 00:20:41.451168679 +0000 UTC m=+717.832445497" observedRunningTime="2026-02-16 00:20:42.152503369 +0000 UTC m=+718.533780197" watchObservedRunningTime="2026-02-16 00:20:42.156526622 +0000 UTC m=+718.537803440" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.188344 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-phf92" podStartSLOduration=3.264420832 podStartE2EDuration="17.188307036s" podCreationTimestamp="2026-02-16 00:20:25 +0000 UTC" firstStartedPulling="2026-02-16 00:20:27.525863205 +0000 UTC m=+703.907140023" lastFinishedPulling="2026-02-16 00:20:41.449749409 +0000 UTC m=+717.831026227" 
observedRunningTime="2026-02-16 00:20:42.177921944 +0000 UTC m=+718.559198762" watchObservedRunningTime="2026-02-16 00:20:42.188307036 +0000 UTC m=+718.569583854" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.231562 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6db558659d-lrffd" podStartSLOduration=2.918604903 podStartE2EDuration="17.231535432s" podCreationTimestamp="2026-02-16 00:20:25 +0000 UTC" firstStartedPulling="2026-02-16 00:20:27.154434546 +0000 UTC m=+703.535711364" lastFinishedPulling="2026-02-16 00:20:41.467365075 +0000 UTC m=+717.848641893" observedRunningTime="2026-02-16 00:20:42.221966453 +0000 UTC m=+718.603243271" watchObservedRunningTime="2026-02-16 00:20:42.231535432 +0000 UTC m=+718.612812250" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.314933 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc"] Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.316359 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02025ac3-beca-451a-8036-70876e1f2439" containerName="extract" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.316381 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="02025ac3-beca-451a-8036-70876e1f2439" containerName="extract" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.316397 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02025ac3-beca-451a-8036-70876e1f2439" containerName="pull" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.316403 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="02025ac3-beca-451a-8036-70876e1f2439" containerName="pull" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.316410 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="02025ac3-beca-451a-8036-70876e1f2439" 
containerName="util" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.316417 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="02025ac3-beca-451a-8036-70876e1f2439" containerName="util" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.316548 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="02025ac3-beca-451a-8036-70876e1f2439" containerName="extract" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.324992 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.332279 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-bzd4k\"" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.332934 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.333280 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.368925 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc"] Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.408969 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48f98fe5-1b28-4403-9f94-a1525ac4c93f-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-9kdhc\" (UID: \"48f98fe5-1b28-4403-9f94-a1525ac4c93f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.409028 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b749p\" (UniqueName: \"kubernetes.io/projected/48f98fe5-1b28-4403-9f94-a1525ac4c93f-kube-api-access-b749p\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-9kdhc\" (UID: \"48f98fe5-1b28-4403-9f94-a1525ac4c93f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.511102 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48f98fe5-1b28-4403-9f94-a1525ac4c93f-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-9kdhc\" (UID: \"48f98fe5-1b28-4403-9f94-a1525ac4c93f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.511553 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b749p\" (UniqueName: \"kubernetes.io/projected/48f98fe5-1b28-4403-9f94-a1525ac4c93f-kube-api-access-b749p\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-9kdhc\" (UID: \"48f98fe5-1b28-4403-9f94-a1525ac4c93f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.511795 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/48f98fe5-1b28-4403-9f94-a1525ac4c93f-tmp\") pod \"cert-manager-operator-controller-manager-7c5b8bd68-9kdhc\" (UID: \"48f98fe5-1b28-4403-9f94-a1525ac4c93f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.542596 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b749p\" (UniqueName: \"kubernetes.io/projected/48f98fe5-1b28-4403-9f94-a1525ac4c93f-kube-api-access-b749p\") pod 
\"cert-manager-operator-controller-manager-7c5b8bd68-9kdhc\" (UID: \"48f98fe5-1b28-4403-9f94-a1525ac4c93f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:42 crc kubenswrapper[5114]: I0216 00:20:42.645759 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" Feb 16 00:20:43 crc kubenswrapper[5114]: I0216 00:20:43.165755 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc"] Feb 16 00:20:43 crc kubenswrapper[5114]: W0216 00:20:43.173404 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48f98fe5_1b28_4403_9f94_a1525ac4c93f.slice/crio-a8192d46774210126bddf699f5ef00f8d95a033af2edc49ef5c2af25e98ce02b WatchSource:0}: Error finding container a8192d46774210126bddf699f5ef00f8d95a033af2edc49ef5c2af25e98ce02b: Status 404 returned error can't find the container with id a8192d46774210126bddf699f5ef00f8d95a033af2edc49ef5c2af25e98ce02b Feb 16 00:20:44 crc kubenswrapper[5114]: I0216 00:20:44.071396 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" event={"ID":"48f98fe5-1b28-4403-9f94-a1525ac4c93f","Type":"ContainerStarted","Data":"a8192d46774210126bddf699f5ef00f8d95a033af2edc49ef5c2af25e98ce02b"} Feb 16 00:20:45 crc kubenswrapper[5114]: I0216 00:20:45.864271 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.017906 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.018190 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.020798 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-sqb4j\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.022781 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.022948 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.023079 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.023194 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.023459 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.023534 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.023665 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.026896 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.169764 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.169806 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.169842 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.169869 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.169886 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-scripts\") pod 
\"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170036 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170085 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170146 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/85e5d57a-83dc-4ddd-9268-29b9441ba077-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170212 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170272 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170299 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170422 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170493 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170522 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: 
\"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.170547 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.274723 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.274783 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.274816 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/85e5d57a-83dc-4ddd-9268-29b9441ba077-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc 
kubenswrapper[5114]: I0216 00:20:46.275045 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275148 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275176 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275214 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275268 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: 
\"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275296 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275310 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275658 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275760 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.275886 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.276703 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.276707 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.276782 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.276868 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.276938 5114 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.276966 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.277403 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.276455 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.278057 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: 
\"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.289456 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.290262 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.290350 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.291458 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.292059 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: 
\"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.293007 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.293474 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/85e5d57a-83dc-4ddd-9268-29b9441ba077-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.309432 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/85e5d57a-83dc-4ddd-9268-29b9441ba077-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"85e5d57a-83dc-4ddd-9268-29b9441ba077\") " pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:46 crc kubenswrapper[5114]: I0216 00:20:46.343357 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:20:49 crc kubenswrapper[5114]: I0216 00:20:49.387826 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:20:49 crc kubenswrapper[5114]: W0216 00:20:49.428478 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85e5d57a_83dc_4ddd_9268_29b9441ba077.slice/crio-fc5721601c6b5a8cce5db9aa251f3d2d519c0900328a19ec6aed9780f04b07cb WatchSource:0}: Error finding container fc5721601c6b5a8cce5db9aa251f3d2d519c0900328a19ec6aed9780f04b07cb: Status 404 returned error can't find the container with id fc5721601c6b5a8cce5db9aa251f3d2d519c0900328a19ec6aed9780f04b07cb Feb 16 00:20:50 crc kubenswrapper[5114]: I0216 00:20:50.115778 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"85e5d57a-83dc-4ddd-9268-29b9441ba077","Type":"ContainerStarted","Data":"fc5721601c6b5a8cce5db9aa251f3d2d519c0900328a19ec6aed9780f04b07cb"} Feb 16 00:20:50 crc kubenswrapper[5114]: I0216 00:20:50.117826 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" event={"ID":"48f98fe5-1b28-4403-9f94-a1525ac4c93f","Type":"ContainerStarted","Data":"070a21df1cdfc315a333c078cf3a0e0486cbd5fd18e7741243ebce92de77fe8e"} Feb 16 00:20:50 crc kubenswrapper[5114]: I0216 00:20:50.143949 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-7c5b8bd68-9kdhc" podStartSLOduration=1.975708279 podStartE2EDuration="8.143920372s" podCreationTimestamp="2026-02-16 00:20:42 +0000 UTC" firstStartedPulling="2026-02-16 00:20:43.176595749 +0000 UTC m=+719.557872567" lastFinishedPulling="2026-02-16 00:20:49.344807852 +0000 UTC m=+725.726084660" observedRunningTime="2026-02-16 
00:20:50.140744783 +0000 UTC m=+726.522021591" watchObservedRunningTime="2026-02-16 00:20:50.143920372 +0000 UTC m=+726.525197190" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.061597 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-hqf4v" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.173958 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-l6nlj"] Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.199973 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-l6nlj"] Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.200133 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.209917 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.212889 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-w8ghs\"" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.215462 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.311766 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mq9m\" (UniqueName: \"kubernetes.io/projected/c9da52c1-a5fa-4758-a23e-eb1ac46f02c6-kube-api-access-9mq9m\") pod \"cert-manager-webhook-597b96b99b-l6nlj\" (UID: \"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6\") " pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.311834 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9da52c1-a5fa-4758-a23e-eb1ac46f02c6-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-l6nlj\" (UID: \"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6\") " pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.413201 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9mq9m\" (UniqueName: \"kubernetes.io/projected/c9da52c1-a5fa-4758-a23e-eb1ac46f02c6-kube-api-access-9mq9m\") pod \"cert-manager-webhook-597b96b99b-l6nlj\" (UID: \"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6\") " pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.413477 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9da52c1-a5fa-4758-a23e-eb1ac46f02c6-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-l6nlj\" (UID: \"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6\") " pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.435910 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mq9m\" (UniqueName: \"kubernetes.io/projected/c9da52c1-a5fa-4758-a23e-eb1ac46f02c6-kube-api-access-9mq9m\") pod \"cert-manager-webhook-597b96b99b-l6nlj\" (UID: \"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6\") " pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.445129 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c9da52c1-a5fa-4758-a23e-eb1ac46f02c6-bound-sa-token\") pod \"cert-manager-webhook-597b96b99b-l6nlj\" (UID: \"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6\") " pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 
00:20:53 crc kubenswrapper[5114]: I0216 00:20:53.517449 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:20:55 crc kubenswrapper[5114]: I0216 00:20:55.521594 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-597b96b99b-l6nlj"] Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.176325 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" event={"ID":"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6","Type":"ContainerStarted","Data":"85f41c1da6cbf69e9fb88704b27abd99937b6e225f0b800fe8f30b716e52258f"} Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.310787 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-dhgdr"] Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.322915 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-dhgdr"] Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.323134 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.325540 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-bv27d\"" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.472644 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpktq\" (UniqueName: \"kubernetes.io/projected/9f9a21b1-a399-465f-975c-22782affdbe7-kube-api-access-gpktq\") pod \"cert-manager-cainjector-8966b78d4-dhgdr\" (UID: \"9f9a21b1-a399-465f-975c-22782affdbe7\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.472965 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f9a21b1-a399-465f-975c-22782affdbe7-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-dhgdr\" (UID: \"9f9a21b1-a399-465f-975c-22782affdbe7\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.574654 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f9a21b1-a399-465f-975c-22782affdbe7-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-dhgdr\" (UID: \"9f9a21b1-a399-465f-975c-22782affdbe7\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.574772 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gpktq\" (UniqueName: \"kubernetes.io/projected/9f9a21b1-a399-465f-975c-22782affdbe7-kube-api-access-gpktq\") pod \"cert-manager-cainjector-8966b78d4-dhgdr\" (UID: \"9f9a21b1-a399-465f-975c-22782affdbe7\") " 
pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.599045 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpktq\" (UniqueName: \"kubernetes.io/projected/9f9a21b1-a399-465f-975c-22782affdbe7-kube-api-access-gpktq\") pod \"cert-manager-cainjector-8966b78d4-dhgdr\" (UID: \"9f9a21b1-a399-465f-975c-22782affdbe7\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.600095 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f9a21b1-a399-465f-975c-22782affdbe7-bound-sa-token\") pod \"cert-manager-cainjector-8966b78d4-dhgdr\" (UID: \"9f9a21b1-a399-465f-975c-22782affdbe7\") " pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:56 crc kubenswrapper[5114]: I0216 00:20:56.647028 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" Feb 16 00:20:57 crc kubenswrapper[5114]: I0216 00:20:57.083462 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-8966b78d4-dhgdr"] Feb 16 00:20:57 crc kubenswrapper[5114]: W0216 00:20:57.106855 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f9a21b1_a399_465f_975c_22782affdbe7.slice/crio-4a5c6bfcb3a477b921f002dcb16ce5dc6193b8bfb2e8bc7ca96ee63b6d675ac4 WatchSource:0}: Error finding container 4a5c6bfcb3a477b921f002dcb16ce5dc6193b8bfb2e8bc7ca96ee63b6d675ac4: Status 404 returned error can't find the container with id 4a5c6bfcb3a477b921f002dcb16ce5dc6193b8bfb2e8bc7ca96ee63b6d675ac4 Feb 16 00:20:57 crc kubenswrapper[5114]: I0216 00:20:57.187689 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" 
event={"ID":"9f9a21b1-a399-465f-975c-22782affdbe7","Type":"ContainerStarted","Data":"4a5c6bfcb3a477b921f002dcb16ce5dc6193b8bfb2e8bc7ca96ee63b6d675ac4"} Feb 16 00:21:06 crc kubenswrapper[5114]: I0216 00:21:06.353185 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"85e5d57a-83dc-4ddd-9268-29b9441ba077","Type":"ContainerStarted","Data":"a28c3e4ad3578e245aff1ad1bb2dacfd90e54e37e36e586583f54102aa5ee4d9"} Feb 16 00:21:06 crc kubenswrapper[5114]: I0216 00:21:06.367756 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" event={"ID":"c9da52c1-a5fa-4758-a23e-eb1ac46f02c6","Type":"ContainerStarted","Data":"d6b4d9d75eeebe729578fe47b67144a2a998584b483fb8bd967133d7af9f62b3"} Feb 16 00:21:06 crc kubenswrapper[5114]: I0216 00:21:06.368228 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:21:06 crc kubenswrapper[5114]: I0216 00:21:06.371328 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" event={"ID":"9f9a21b1-a399-465f-975c-22782affdbe7","Type":"ContainerStarted","Data":"df44487558d0891c138d44fbed2e2a4f64c24ea9099ed7fadb3a9b3399859f62"} Feb 16 00:21:06 crc kubenswrapper[5114]: I0216 00:21:06.411203 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" podStartSLOduration=3.051631857 podStartE2EDuration="13.4111818s" podCreationTimestamp="2026-02-16 00:20:53 +0000 UTC" firstStartedPulling="2026-02-16 00:20:55.539470729 +0000 UTC m=+731.920747547" lastFinishedPulling="2026-02-16 00:21:05.899020662 +0000 UTC m=+742.280297490" observedRunningTime="2026-02-16 00:21:06.405602553 +0000 UTC m=+742.786879371" watchObservedRunningTime="2026-02-16 00:21:06.4111818 +0000 UTC m=+742.792458618" Feb 16 00:21:06 crc kubenswrapper[5114]: 
I0216 00:21:06.425904 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-8966b78d4-dhgdr" podStartSLOduration=1.636471872 podStartE2EDuration="10.425882353s" podCreationTimestamp="2026-02-16 00:20:56 +0000 UTC" firstStartedPulling="2026-02-16 00:20:57.110424613 +0000 UTC m=+733.491701421" lastFinishedPulling="2026-02-16 00:21:05.899835084 +0000 UTC m=+742.281111902" observedRunningTime="2026-02-16 00:21:06.424740761 +0000 UTC m=+742.806017579" watchObservedRunningTime="2026-02-16 00:21:06.425882353 +0000 UTC m=+742.807159171" Feb 16 00:21:06 crc kubenswrapper[5114]: I0216 00:21:06.471522 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:21:06 crc kubenswrapper[5114]: I0216 00:21:06.513061 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Feb 16 00:21:08 crc kubenswrapper[5114]: I0216 00:21:08.388063 5114 generic.go:358] "Generic (PLEG): container finished" podID="85e5d57a-83dc-4ddd-9268-29b9441ba077" containerID="a28c3e4ad3578e245aff1ad1bb2dacfd90e54e37e36e586583f54102aa5ee4d9" exitCode=0 Feb 16 00:21:08 crc kubenswrapper[5114]: I0216 00:21:08.388200 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"85e5d57a-83dc-4ddd-9268-29b9441ba077","Type":"ContainerDied","Data":"a28c3e4ad3578e245aff1ad1bb2dacfd90e54e37e36e586583f54102aa5ee4d9"} Feb 16 00:21:09 crc kubenswrapper[5114]: I0216 00:21:09.399971 5114 generic.go:358] "Generic (PLEG): container finished" podID="85e5d57a-83dc-4ddd-9268-29b9441ba077" containerID="37800bdbcef037ba213365b76a1fc65a06630e7e1eb13055a91b42998f01bcca" exitCode=0 Feb 16 00:21:09 crc kubenswrapper[5114]: I0216 00:21:09.400050 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" 
event={"ID":"85e5d57a-83dc-4ddd-9268-29b9441ba077","Type":"ContainerDied","Data":"37800bdbcef037ba213365b76a1fc65a06630e7e1eb13055a91b42998f01bcca"} Feb 16 00:21:10 crc kubenswrapper[5114]: I0216 00:21:10.410718 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"85e5d57a-83dc-4ddd-9268-29b9441ba077","Type":"ContainerStarted","Data":"7d82eed14d48cf97a0db681dd6745f0c68f303652772e79226131c36ce907c96"} Feb 16 00:21:10 crc kubenswrapper[5114]: I0216 00:21:10.416475 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:21:10 crc kubenswrapper[5114]: I0216 00:21:10.464298 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=8.858307575 podStartE2EDuration="25.464278261s" podCreationTimestamp="2026-02-16 00:20:45 +0000 UTC" firstStartedPulling="2026-02-16 00:20:49.44180575 +0000 UTC m=+725.823082568" lastFinishedPulling="2026-02-16 00:21:06.047776436 +0000 UTC m=+742.429053254" observedRunningTime="2026-02-16 00:21:10.459215958 +0000 UTC m=+746.840492786" watchObservedRunningTime="2026-02-16 00:21:10.464278261 +0000 UTC m=+746.845555089" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.134134 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-759f64656b-zq6b6"] Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.150135 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.155447 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-kln5t\"" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.159177 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-zq6b6"] Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.263313 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8-bound-sa-token\") pod \"cert-manager-759f64656b-zq6b6\" (UID: \"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8\") " pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.263941 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x5dm\" (UniqueName: \"kubernetes.io/projected/c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8-kube-api-access-7x5dm\") pod \"cert-manager-759f64656b-zq6b6\" (UID: \"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8\") " pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.365963 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7x5dm\" (UniqueName: \"kubernetes.io/projected/c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8-kube-api-access-7x5dm\") pod \"cert-manager-759f64656b-zq6b6\" (UID: \"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8\") " pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.366123 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8-bound-sa-token\") pod \"cert-manager-759f64656b-zq6b6\" (UID: 
\"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8\") " pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.382155 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-597b96b99b-l6nlj" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.415015 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8-bound-sa-token\") pod \"cert-manager-759f64656b-zq6b6\" (UID: \"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8\") " pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.425270 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x5dm\" (UniqueName: \"kubernetes.io/projected/c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8-kube-api-access-7x5dm\") pod \"cert-manager-759f64656b-zq6b6\" (UID: \"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8\") " pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.470660 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-759f64656b-zq6b6" Feb 16 00:21:12 crc kubenswrapper[5114]: I0216 00:21:12.728226 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-759f64656b-zq6b6"] Feb 16 00:21:13 crc kubenswrapper[5114]: I0216 00:21:13.444314 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-zq6b6" event={"ID":"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8","Type":"ContainerStarted","Data":"15407c755e60069f44d3a53b01a618a4dec19d80278a463a49e806b75f32efa7"} Feb 16 00:21:13 crc kubenswrapper[5114]: I0216 00:21:13.444402 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-759f64656b-zq6b6" event={"ID":"c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8","Type":"ContainerStarted","Data":"1e728e99fbdd805d6984aea8134ee8bced123b9ab707c67e1b1242a1a658b414"} Feb 16 00:21:13 crc kubenswrapper[5114]: I0216 00:21:13.467674 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-759f64656b-zq6b6" podStartSLOduration=1.467646861 podStartE2EDuration="1.467646861s" podCreationTimestamp="2026-02-16 00:21:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:21:13.462446935 +0000 UTC m=+749.843723753" watchObservedRunningTime="2026-02-16 00:21:13.467646861 +0000 UTC m=+749.848923709" Feb 16 00:21:22 crc kubenswrapper[5114]: I0216 00:21:22.548501 5114 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="85e5d57a-83dc-4ddd-9268-29b9441ba077" containerName="elasticsearch" probeResult="failure" output=< Feb 16 00:21:22 crc kubenswrapper[5114]: {"timestamp": "2026-02-16T00:21:22+00:00", "message": "readiness probe failed", "curl_rc": "7"} Feb 16 00:21:22 crc kubenswrapper[5114]: > Feb 16 00:21:27 crc kubenswrapper[5114]: I0216 00:21:27.976048 5114 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.623471 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.655936 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.656138 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.660486 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-catalog-configmap-partition-1\"" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.759367 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58zb\" (UniqueName: \"kubernetes.io/projected/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-kube-api-access-n58zb\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.759728 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " 
pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.759821 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.861462 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.861528 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.861698 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n58zb\" (UniqueName: 
\"kubernetes.io/projected/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-kube-api-access-n58zb\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.862285 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-smart-gateway-operator-catalog-configmap-partition-1-unzip\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.862697 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"smart-gateway-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-smart-gateway-operator-catalog-configmap-partition-1-volume\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.904388 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58zb\" (UniqueName: \"kubernetes.io/projected/c20d07e9-7fa1-4dd5-acba-5ef31272c4f3-kube-api-access-n58zb\") pod \"infrawatch-operators-smart-gateway-operator-bundle-nightly-head\" (UID: \"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3\") " pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:40 crc kubenswrapper[5114]: I0216 00:21:40.982817 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" Feb 16 00:21:41 crc kubenswrapper[5114]: I0216 00:21:41.237332 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head"] Feb 16 00:21:41 crc kubenswrapper[5114]: I0216 00:21:41.728294 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3","Type":"ContainerStarted","Data":"33c8f64bb908430c9611a9d56f98137eb78aabd1b2edc034dda77bcfc466c64c"} Feb 16 00:21:47 crc kubenswrapper[5114]: I0216 00:21:47.786151 5114 generic.go:358] "Generic (PLEG): container finished" podID="c20d07e9-7fa1-4dd5-acba-5ef31272c4f3" containerID="9ba9c5dd1d3649f9004b57fb4af281861f3bc557ee32f467ab9fa22bca274c58" exitCode=0 Feb 16 00:21:47 crc kubenswrapper[5114]: I0216 00:21:47.786277 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3","Type":"ContainerDied","Data":"9ba9c5dd1d3649f9004b57fb4af281861f3bc557ee32f467ab9fa22bca274c58"} Feb 16 00:21:50 crc kubenswrapper[5114]: I0216 00:21:50.084541 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:21:50 crc kubenswrapper[5114]: I0216 00:21:50.084654 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 16 00:21:50 crc kubenswrapper[5114]: I0216 00:21:50.826809 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" event={"ID":"c20d07e9-7fa1-4dd5-acba-5ef31272c4f3","Type":"ContainerStarted","Data":"3d1f751e0a9413f204b44ca807547807fac192ff563fd78c3063cbef76f9e72f"} Feb 16 00:21:50 crc kubenswrapper[5114]: I0216 00:21:50.848821 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-smart-gateway-operator-bundle-nightly-head" podStartSLOduration=1.49703905 podStartE2EDuration="10.848795942s" podCreationTimestamp="2026-02-16 00:21:40 +0000 UTC" firstStartedPulling="2026-02-16 00:21:41.238406493 +0000 UTC m=+777.619683301" lastFinishedPulling="2026-02-16 00:21:50.590163375 +0000 UTC m=+786.971440193" observedRunningTime="2026-02-16 00:21:50.847101014 +0000 UTC m=+787.228377862" watchObservedRunningTime="2026-02-16 00:21:50.848795942 +0000 UTC m=+787.230072810" Feb 16 00:21:51 crc kubenswrapper[5114]: I0216 00:21:51.901130 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft"] Feb 16 00:21:51 crc kubenswrapper[5114]: I0216 00:21:51.910971 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:51 crc kubenswrapper[5114]: I0216 00:21:51.917471 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft"] Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.051854 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l622q\" (UniqueName: \"kubernetes.io/projected/11c17cc8-9a21-4f1b-b526-b08d7c96e169-kube-api-access-l622q\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.051944 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.051998 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.153750 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l622q\" (UniqueName: 
\"kubernetes.io/projected/11c17cc8-9a21-4f1b-b526-b08d7c96e169-kube-api-access-l622q\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.153863 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.153939 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.154776 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-bundle\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.155343 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-util\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " 
pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.201001 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l622q\" (UniqueName: \"kubernetes.io/projected/11c17cc8-9a21-4f1b-b526-b08d7c96e169-kube-api-access-l622q\") pod \"581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.240705 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.523149 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft"] Feb 16 00:21:52 crc kubenswrapper[5114]: W0216 00:21:52.546401 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11c17cc8_9a21_4f1b_b526_b08d7c96e169.slice/crio-c00036c59ea82a29a018a4a3e8476af2db41e9e9fef3ca5d0ee077abbeb9ce52 WatchSource:0}: Error finding container c00036c59ea82a29a018a4a3e8476af2db41e9e9fef3ca5d0ee077abbeb9ce52: Status 404 returned error can't find the container with id c00036c59ea82a29a018a4a3e8476af2db41e9e9fef3ca5d0ee077abbeb9ce52 Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.854616 5114 generic.go:358] "Generic (PLEG): container finished" podID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerID="e85fa97ce2945f75c7b3cb391f42037f16e9e5a70797b48fe9079c985f0ca2ad" exitCode=0 Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.854699 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" 
event={"ID":"11c17cc8-9a21-4f1b-b526-b08d7c96e169","Type":"ContainerDied","Data":"e85fa97ce2945f75c7b3cb391f42037f16e9e5a70797b48fe9079c985f0ca2ad"} Feb 16 00:21:52 crc kubenswrapper[5114]: I0216 00:21:52.855037 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" event={"ID":"11c17cc8-9a21-4f1b-b526-b08d7c96e169","Type":"ContainerStarted","Data":"c00036c59ea82a29a018a4a3e8476af2db41e9e9fef3ca5d0ee077abbeb9ce52"} Feb 16 00:21:53 crc kubenswrapper[5114]: I0216 00:21:53.870220 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" event={"ID":"11c17cc8-9a21-4f1b-b526-b08d7c96e169","Type":"ContainerStarted","Data":"bd8085dc3aca3207c6e1005d6f23cb902992a8f029b1c818ef4fb47be4e0d9d9"} Feb 16 00:21:54 crc kubenswrapper[5114]: I0216 00:21:54.883906 5114 generic.go:358] "Generic (PLEG): container finished" podID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerID="bd8085dc3aca3207c6e1005d6f23cb902992a8f029b1c818ef4fb47be4e0d9d9" exitCode=0 Feb 16 00:21:54 crc kubenswrapper[5114]: I0216 00:21:54.884043 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" event={"ID":"11c17cc8-9a21-4f1b-b526-b08d7c96e169","Type":"ContainerDied","Data":"bd8085dc3aca3207c6e1005d6f23cb902992a8f029b1c818ef4fb47be4e0d9d9"} Feb 16 00:21:55 crc kubenswrapper[5114]: I0216 00:21:55.896990 5114 generic.go:358] "Generic (PLEG): container finished" podID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerID="ec70a4c85a24dacca271755d87379a33000f3287facd3f7de43219d85671b1dd" exitCode=0 Feb 16 00:21:55 crc kubenswrapper[5114]: I0216 00:21:55.897130 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" 
event={"ID":"11c17cc8-9a21-4f1b-b526-b08d7c96e169","Type":"ContainerDied","Data":"ec70a4c85a24dacca271755d87379a33000f3287facd3f7de43219d85671b1dd"} Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.265597 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.390973 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l622q\" (UniqueName: \"kubernetes.io/projected/11c17cc8-9a21-4f1b-b526-b08d7c96e169-kube-api-access-l622q\") pod \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.391193 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-util\") pod \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.391331 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-bundle\") pod \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\" (UID: \"11c17cc8-9a21-4f1b-b526-b08d7c96e169\") " Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.393131 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-bundle" (OuterVolumeSpecName: "bundle") pod "11c17cc8-9a21-4f1b-b526-b08d7c96e169" (UID: "11c17cc8-9a21-4f1b-b526-b08d7c96e169"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.403641 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11c17cc8-9a21-4f1b-b526-b08d7c96e169-kube-api-access-l622q" (OuterVolumeSpecName: "kube-api-access-l622q") pod "11c17cc8-9a21-4f1b-b526-b08d7c96e169" (UID: "11c17cc8-9a21-4f1b-b526-b08d7c96e169"). InnerVolumeSpecName "kube-api-access-l622q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.405278 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-util" (OuterVolumeSpecName: "util") pod "11c17cc8-9a21-4f1b-b526-b08d7c96e169" (UID: "11c17cc8-9a21-4f1b-b526-b08d7c96e169"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.493341 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l622q\" (UniqueName: \"kubernetes.io/projected/11c17cc8-9a21-4f1b-b526-b08d7c96e169-kube-api-access-l622q\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.493999 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-util\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.494087 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/11c17cc8-9a21-4f1b-b526-b08d7c96e169-bundle\") on node \"crc\" DevicePath \"\"" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.917368 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" 
event={"ID":"11c17cc8-9a21-4f1b-b526-b08d7c96e169","Type":"ContainerDied","Data":"c00036c59ea82a29a018a4a3e8476af2db41e9e9fef3ca5d0ee077abbeb9ce52"} Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.917470 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c00036c59ea82a29a018a4a3e8476af2db41e9e9fef3ca5d0ee077abbeb9ce52" Feb 16 00:21:57 crc kubenswrapper[5114]: I0216 00:21:57.917473 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/581064c273eeb770c9fbc3e03ee675cb542f06b12d97607b3aad97666174qft" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.148456 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29520022-kwsf6"] Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.149707 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerName="util" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.149732 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerName="util" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.149759 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerName="pull" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.149768 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerName="pull" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.149803 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerName="extract" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.149810 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerName="extract" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 
00:22:00.149937 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="11c17cc8-9a21-4f1b-b526-b08d7c96e169" containerName="extract" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.154527 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520022-kwsf6" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.159155 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.159797 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.160043 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\"" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.161529 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520022-kwsf6"] Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.247667 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrf4j\" (UniqueName: \"kubernetes.io/projected/a185ceb8-ad6c-4fc0-8cce-72142ea846d8-kube-api-access-nrf4j\") pod \"auto-csr-approver-29520022-kwsf6\" (UID: \"a185ceb8-ad6c-4fc0-8cce-72142ea846d8\") " pod="openshift-infra/auto-csr-approver-29520022-kwsf6" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.349372 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nrf4j\" (UniqueName: \"kubernetes.io/projected/a185ceb8-ad6c-4fc0-8cce-72142ea846d8-kube-api-access-nrf4j\") pod \"auto-csr-approver-29520022-kwsf6\" (UID: \"a185ceb8-ad6c-4fc0-8cce-72142ea846d8\") " pod="openshift-infra/auto-csr-approver-29520022-kwsf6" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 
00:22:00.386231 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrf4j\" (UniqueName: \"kubernetes.io/projected/a185ceb8-ad6c-4fc0-8cce-72142ea846d8-kube-api-access-nrf4j\") pod \"auto-csr-approver-29520022-kwsf6\" (UID: \"a185ceb8-ad6c-4fc0-8cce-72142ea846d8\") " pod="openshift-infra/auto-csr-approver-29520022-kwsf6" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.527934 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520022-kwsf6" Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.767312 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520022-kwsf6"] Feb 16 00:22:00 crc kubenswrapper[5114]: I0216 00:22:00.944566 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520022-kwsf6" event={"ID":"a185ceb8-ad6c-4fc0-8cce-72142ea846d8","Type":"ContainerStarted","Data":"817e5534cd216b0c8e44f02187b7ebed933386effe7a4dcab8ea6242cf4a4051"} Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.527267 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-76hjx"] Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.535192 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.538852 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-vv2l7\"" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.543442 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-76hjx"] Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.668737 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/18677532-f05d-4f6d-bd9f-5ff26cdd64c8-runner\") pod \"smart-gateway-operator-97b85656c-76hjx\" (UID: \"18677532-f05d-4f6d-bd9f-5ff26cdd64c8\") " pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.668964 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s2gq\" (UniqueName: \"kubernetes.io/projected/18677532-f05d-4f6d-bd9f-5ff26cdd64c8-kube-api-access-5s2gq\") pod \"smart-gateway-operator-97b85656c-76hjx\" (UID: \"18677532-f05d-4f6d-bd9f-5ff26cdd64c8\") " pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.770350 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5s2gq\" (UniqueName: \"kubernetes.io/projected/18677532-f05d-4f6d-bd9f-5ff26cdd64c8-kube-api-access-5s2gq\") pod \"smart-gateway-operator-97b85656c-76hjx\" (UID: \"18677532-f05d-4f6d-bd9f-5ff26cdd64c8\") " pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.770464 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/18677532-f05d-4f6d-bd9f-5ff26cdd64c8-runner\") pod \"smart-gateway-operator-97b85656c-76hjx\" (UID: \"18677532-f05d-4f6d-bd9f-5ff26cdd64c8\") " pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.771123 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/18677532-f05d-4f6d-bd9f-5ff26cdd64c8-runner\") pod \"smart-gateway-operator-97b85656c-76hjx\" (UID: \"18677532-f05d-4f6d-bd9f-5ff26cdd64c8\") " pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.804811 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s2gq\" (UniqueName: \"kubernetes.io/projected/18677532-f05d-4f6d-bd9f-5ff26cdd64c8-kube-api-access-5s2gq\") pod \"smart-gateway-operator-97b85656c-76hjx\" (UID: \"18677532-f05d-4f6d-bd9f-5ff26cdd64c8\") " pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:01 crc kubenswrapper[5114]: I0216 00:22:01.858338 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" Feb 16 00:22:02 crc kubenswrapper[5114]: I0216 00:22:02.109149 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-97b85656c-76hjx"] Feb 16 00:22:02 crc kubenswrapper[5114]: W0216 00:22:02.115904 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18677532_f05d_4f6d_bd9f_5ff26cdd64c8.slice/crio-7f6fb1517188a86182e136fad75e8a5a2eb3da6479821b7d12276ff4b9b69ff9 WatchSource:0}: Error finding container 7f6fb1517188a86182e136fad75e8a5a2eb3da6479821b7d12276ff4b9b69ff9: Status 404 returned error can't find the container with id 7f6fb1517188a86182e136fad75e8a5a2eb3da6479821b7d12276ff4b9b69ff9 Feb 16 00:22:02 crc kubenswrapper[5114]: I0216 00:22:02.966294 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" event={"ID":"18677532-f05d-4f6d-bd9f-5ff26cdd64c8","Type":"ContainerStarted","Data":"7f6fb1517188a86182e136fad75e8a5a2eb3da6479821b7d12276ff4b9b69ff9"} Feb 16 00:22:02 crc kubenswrapper[5114]: I0216 00:22:02.969656 5114 generic.go:358] "Generic (PLEG): container finished" podID="a185ceb8-ad6c-4fc0-8cce-72142ea846d8" containerID="b0603adaa5b6469cbce9719c0194dbc76b099fb9ad70da1138307a7301b2ae4b" exitCode=0 Feb 16 00:22:02 crc kubenswrapper[5114]: I0216 00:22:02.969903 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520022-kwsf6" event={"ID":"a185ceb8-ad6c-4fc0-8cce-72142ea846d8","Type":"ContainerDied","Data":"b0603adaa5b6469cbce9719c0194dbc76b099fb9ad70da1138307a7301b2ae4b"} Feb 16 00:22:04 crc kubenswrapper[5114]: I0216 00:22:04.305965 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29520022-kwsf6" Feb 16 00:22:04 crc kubenswrapper[5114]: I0216 00:22:04.416856 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrf4j\" (UniqueName: \"kubernetes.io/projected/a185ceb8-ad6c-4fc0-8cce-72142ea846d8-kube-api-access-nrf4j\") pod \"a185ceb8-ad6c-4fc0-8cce-72142ea846d8\" (UID: \"a185ceb8-ad6c-4fc0-8cce-72142ea846d8\") " Feb 16 00:22:04 crc kubenswrapper[5114]: I0216 00:22:04.428090 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a185ceb8-ad6c-4fc0-8cce-72142ea846d8-kube-api-access-nrf4j" (OuterVolumeSpecName: "kube-api-access-nrf4j") pod "a185ceb8-ad6c-4fc0-8cce-72142ea846d8" (UID: "a185ceb8-ad6c-4fc0-8cce-72142ea846d8"). InnerVolumeSpecName "kube-api-access-nrf4j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:22:04 crc kubenswrapper[5114]: I0216 00:22:04.519689 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nrf4j\" (UniqueName: \"kubernetes.io/projected/a185ceb8-ad6c-4fc0-8cce-72142ea846d8-kube-api-access-nrf4j\") on node \"crc\" DevicePath \"\"" Feb 16 00:22:04 crc kubenswrapper[5114]: I0216 00:22:04.989398 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29520022-kwsf6" Feb 16 00:22:04 crc kubenswrapper[5114]: I0216 00:22:04.989435 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520022-kwsf6" event={"ID":"a185ceb8-ad6c-4fc0-8cce-72142ea846d8","Type":"ContainerDied","Data":"817e5534cd216b0c8e44f02187b7ebed933386effe7a4dcab8ea6242cf4a4051"} Feb 16 00:22:04 crc kubenswrapper[5114]: I0216 00:22:04.989496 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="817e5534cd216b0c8e44f02187b7ebed933386effe7a4dcab8ea6242cf4a4051" Feb 16 00:22:05 crc kubenswrapper[5114]: I0216 00:22:05.376963 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29520016-jfhs6"] Feb 16 00:22:05 crc kubenswrapper[5114]: I0216 00:22:05.384773 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29520016-jfhs6"] Feb 16 00:22:05 crc kubenswrapper[5114]: I0216 00:22:05.827144 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="932e8fef-e1b4-4e9c-a29d-5460a6497aa3" path="/var/lib/kubelet/pods/932e8fef-e1b4-4e9c-a29d-5460a6497aa3/volumes" Feb 16 00:22:18 crc kubenswrapper[5114]: I0216 00:22:18.111214 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" event={"ID":"18677532-f05d-4f6d-bd9f-5ff26cdd64c8","Type":"ContainerStarted","Data":"9c696cd71b7d812948662aaeb5b70ad0cd28a895b986b9b7f9220801fa92e998"} Feb 16 00:22:18 crc kubenswrapper[5114]: I0216 00:22:18.138171 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-97b85656c-76hjx" podStartSLOduration=1.5437981650000001 podStartE2EDuration="17.138142753s" podCreationTimestamp="2026-02-16 00:22:01 +0000 UTC" firstStartedPulling="2026-02-16 00:22:02.119113846 +0000 UTC m=+798.500390664" lastFinishedPulling="2026-02-16 
00:22:17.713458394 +0000 UTC m=+814.094735252" observedRunningTime="2026-02-16 00:22:18.136413974 +0000 UTC m=+814.517690822" watchObservedRunningTime="2026-02-16 00:22:18.138142753 +0000 UTC m=+814.519419581" Feb 16 00:22:20 crc kubenswrapper[5114]: I0216 00:22:20.085682 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:22:20 crc kubenswrapper[5114]: I0216 00:22:20.086213 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.237382 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.239227 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a185ceb8-ad6c-4fc0-8cce-72142ea846d8" containerName="oc" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.239327 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a185ceb8-ad6c-4fc0-8cce-72142ea846d8" containerName="oc" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.239604 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="a185ceb8-ad6c-4fc0-8cce-72142ea846d8" containerName="oc" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.250360 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.252759 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.259604 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-catalog-configmap-partition-1\"" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.359370 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/24a8e455-2771-4f16-a960-431459c5661b-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.359487 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/24a8e455-2771-4f16-a960-431459c5661b-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.359599 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkn8b\" (UniqueName: 
\"kubernetes.io/projected/24a8e455-2771-4f16-a960-431459c5661b-kube-api-access-pkn8b\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.461876 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/24a8e455-2771-4f16-a960-431459c5661b-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.461982 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pkn8b\" (UniqueName: \"kubernetes.io/projected/24a8e455-2771-4f16-a960-431459c5661b-kube-api-access-pkn8b\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.462070 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/24a8e455-2771-4f16-a960-431459c5661b-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.462613 5114 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-unzip\" (UniqueName: \"kubernetes.io/empty-dir/24a8e455-2771-4f16-a960-431459c5661b-service-telemetry-operator-catalog-configmap-partition-1-unzip\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.463115 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-operator-catalog-configmap-partition-1-volume\" (UniqueName: \"kubernetes.io/configmap/24a8e455-2771-4f16-a960-431459c5661b-service-telemetry-operator-catalog-configmap-partition-1-volume\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.493755 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkn8b\" (UniqueName: \"kubernetes.io/projected/24a8e455-2771-4f16-a960-431459c5661b-kube-api-access-pkn8b\") pod \"awatch-operators-service-telemetry-operator-bundle-nightly-head\" (UID: \"24a8e455-2771-4f16-a960-431459c5661b\") " pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.580488 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" Feb 16 00:22:40 crc kubenswrapper[5114]: I0216 00:22:40.824836 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head"] Feb 16 00:22:41 crc kubenswrapper[5114]: I0216 00:22:41.325186 5114 generic.go:358] "Generic (PLEG): container finished" podID="24a8e455-2771-4f16-a960-431459c5661b" containerID="4e10b435f4fd6e00c09af04acd87474c64c0e908c4724ac69e489b3f2e9dc58f" exitCode=0 Feb 16 00:22:41 crc kubenswrapper[5114]: I0216 00:22:41.325330 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"24a8e455-2771-4f16-a960-431459c5661b","Type":"ContainerDied","Data":"4e10b435f4fd6e00c09af04acd87474c64c0e908c4724ac69e489b3f2e9dc58f"} Feb 16 00:22:41 crc kubenswrapper[5114]: I0216 00:22:41.326020 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"24a8e455-2771-4f16-a960-431459c5661b","Type":"ContainerStarted","Data":"dc78aa435d115b1bed90b8e9fe72abe60a853f9b40520e19f8c1460afd97f790"} Feb 16 00:22:43 crc kubenswrapper[5114]: I0216 00:22:43.346446 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" event={"ID":"24a8e455-2771-4f16-a960-431459c5661b","Type":"ContainerStarted","Data":"937f8bc69475fe27af26579906d81266a485b25db7e30d32e1d828564176fb22"} Feb 16 00:22:43 crc kubenswrapper[5114]: I0216 00:22:43.377699 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/awatch-operators-service-telemetry-operator-bundle-nightly-head" podStartSLOduration=2.516504031 podStartE2EDuration="3.377676671s" podCreationTimestamp="2026-02-16 00:22:40 +0000 UTC" 
firstStartedPulling="2026-02-16 00:22:41.329187454 +0000 UTC m=+837.710464282" lastFinishedPulling="2026-02-16 00:22:42.190360094 +0000 UTC m=+838.571636922" observedRunningTime="2026-02-16 00:22:43.371197119 +0000 UTC m=+839.752473977" watchObservedRunningTime="2026-02-16 00:22:43.377676671 +0000 UTC m=+839.758953529" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.465475 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt"] Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.528708 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.533740 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt"] Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.561980 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.562038 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn9qc\" (UniqueName: \"kubernetes.io/projected/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-kube-api-access-fn9qc\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.562367 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.663983 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.664106 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.664142 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fn9qc\" (UniqueName: \"kubernetes.io/projected/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-kube-api-access-fn9qc\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.664752 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-util\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.664802 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-bundle\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.696265 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn9qc\" (UniqueName: \"kubernetes.io/projected/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-kube-api-access-fn9qc\") pod \"59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") " pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:45 crc kubenswrapper[5114]: I0216 00:22:45.850745 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.072621 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"] Feb 16 00:22:46 crc kubenswrapper[5114]: W0216 00:22:46.100029 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5ec4fa2_571c_4a06_b7b7_26ceb4d84f3e.slice/crio-1ad155161c7cd545514b7ff122f0b360076265c95b4ee125480f922a7bfa7823 WatchSource:0}: Error finding container 1ad155161c7cd545514b7ff122f0b360076265c95b4ee125480f922a7bfa7823: Status 404 returned error can't find the container with id 1ad155161c7cd545514b7ff122f0b360076265c95b4ee125480f922a7bfa7823 Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.379842 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"] Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.380545 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt"] Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.380563 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" event={"ID":"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e","Type":"ContainerStarted","Data":"1ad155161c7cd545514b7ff122f0b360076265c95b4ee125480f922a7bfa7823"} Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.380095 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.382805 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.478771 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.478863 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25mdw\" (UniqueName: \"kubernetes.io/projected/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-kube-api-access-25mdw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.479000 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.581479 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.581603 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25mdw\" (UniqueName: \"kubernetes.io/projected/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-kube-api-access-25mdw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.581644 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.582222 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.582348 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.606646 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25mdw\" (UniqueName: \"kubernetes.io/projected/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-kube-api-access-25mdw\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:46 crc kubenswrapper[5114]: I0216 00:22:46.721098 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:47 crc kubenswrapper[5114]: I0216 00:22:47.210033 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"]
Feb 16 00:22:47 crc kubenswrapper[5114]: I0216 00:22:47.384392 5114 generic.go:358] "Generic (PLEG): container finished" podID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerID="aee04b68d4f9c7a7cbc80a924bbe27764a8634f8b304e3da972fcc63b1ebc793" exitCode=0
Feb 16 00:22:47 crc kubenswrapper[5114]: I0216 00:22:47.384496 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" event={"ID":"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e","Type":"ContainerDied","Data":"aee04b68d4f9c7a7cbc80a924bbe27764a8634f8b304e3da972fcc63b1ebc793"}
Feb 16 00:22:47 crc kubenswrapper[5114]: I0216 00:22:47.389974 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj" event={"ID":"6fa0aab5-e3d3-4cb3-8409-296dcc548f30","Type":"ContainerStarted","Data":"7c8ba0a252d8e269cee55cb8474db8d48c5cda01943411c9e491401464ea030b"}
Feb 16 00:22:48 crc kubenswrapper[5114]: I0216 00:22:48.401630 5114 generic.go:358] "Generic (PLEG): container finished" podID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerID="103ed9788cd5dab5fb356fce27abb7dd18bd2e2feff5a5f32e75e642788cf92e" exitCode=0
Feb 16 00:22:48 crc kubenswrapper[5114]: I0216 00:22:48.401779 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj" event={"ID":"6fa0aab5-e3d3-4cb3-8409-296dcc548f30","Type":"ContainerDied","Data":"103ed9788cd5dab5fb356fce27abb7dd18bd2e2feff5a5f32e75e642788cf92e"}
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.155751 5114 scope.go:117] "RemoveContainer" containerID="8df397633cead5b193b3d652bf4ed302a5acdefe24ccc6fa92c11bc346083e71"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.542774 5114 generic.go:358] "Generic (PLEG): container finished" podID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerID="f3029e1fe28ebd90bb9dcf27b6807cfca9c0751bccf91db9a6e8a1ed4ef85489" exitCode=0
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.543014 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" event={"ID":"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e","Type":"ContainerDied","Data":"f3029e1fe28ebd90bb9dcf27b6807cfca9c0751bccf91db9a6e8a1ed4ef85489"}
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.626738 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zjdm8"]
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.638459 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.664336 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zjdm8"]
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.839465 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-utilities\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.839526 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-catalog-content\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.839721 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p2cq\" (UniqueName: \"kubernetes.io/projected/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-kube-api-access-5p2cq\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.942183 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-catalog-content\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.942414 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5p2cq\" (UniqueName: \"kubernetes.io/projected/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-kube-api-access-5p2cq\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.942874 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-utilities\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.943376 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-catalog-content\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.943515 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-utilities\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.966566 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p2cq\" (UniqueName: \"kubernetes.io/projected/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-kube-api-access-5p2cq\") pod \"redhat-operators-zjdm8\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:49 crc kubenswrapper[5114]: I0216 00:22:49.983431 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.087379 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.087685 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.087744 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn"
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.088490 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d1dfab39c6a9f63f318ef9f1041cbb88e1fb9256dbb5157a9f49af9886d305ad"} pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.088569 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" containerID="cri-o://d1dfab39c6a9f63f318ef9f1041cbb88e1fb9256dbb5157a9f49af9886d305ad" gracePeriod=600
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.452715 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zjdm8"]
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.566450 5114 generic.go:358] "Generic (PLEG): container finished" podID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerID="d1dfab39c6a9f63f318ef9f1041cbb88e1fb9256dbb5157a9f49af9886d305ad" exitCode=0
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.566533 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerDied","Data":"d1dfab39c6a9f63f318ef9f1041cbb88e1fb9256dbb5157a9f49af9886d305ad"}
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.566934 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"08e121677631f460690080580c06d5b5374b81d3fbafdd43ec22ad0e68333766"}
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.567010 5114 scope.go:117] "RemoveContainer" containerID="e134b7537fe941db009f9833124e34b05d191a4535dab34b636141af6e8135c3"
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.580279 5114 generic.go:358] "Generic (PLEG): container finished" podID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerID="5cf242f0a121f2cedba8ae3f4926b9656bfea7752f5ff493f1534ac6f9f541c8" exitCode=0
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.580380 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" event={"ID":"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e","Type":"ContainerDied","Data":"5cf242f0a121f2cedba8ae3f4926b9656bfea7752f5ff493f1534ac6f9f541c8"}
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.584355 5114 generic.go:358] "Generic (PLEG): container finished" podID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerID="7e7e2979e410eb1de0ce4ca0009fdeb1b793d2637bf1d45b3762611d06ce9e94" exitCode=0
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.584790 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj" event={"ID":"6fa0aab5-e3d3-4cb3-8409-296dcc548f30","Type":"ContainerDied","Data":"7e7e2979e410eb1de0ce4ca0009fdeb1b793d2637bf1d45b3762611d06ce9e94"}
Feb 16 00:22:50 crc kubenswrapper[5114]: I0216 00:22:50.593386 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjdm8" event={"ID":"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9","Type":"ContainerStarted","Data":"8ab9de50958bc862fabca8b8af38f2b5c50a685d5824499b8118338032b79425"}
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.611686 5114 generic.go:358] "Generic (PLEG): container finished" podID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerID="7ed91bdc6f1638c7683b9d10a41040b62dce5c4b5046f19188c1f6ca670fc76f" exitCode=0
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.611782 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj" event={"ID":"6fa0aab5-e3d3-4cb3-8409-296dcc548f30","Type":"ContainerDied","Data":"7ed91bdc6f1638c7683b9d10a41040b62dce5c4b5046f19188c1f6ca670fc76f"}
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.614508 5114 generic.go:358] "Generic (PLEG): container finished" podID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerID="44436811702db93fbef110badd0e712d1ea394a13540c601d64207fba7090522" exitCode=0
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.614577 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjdm8" event={"ID":"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9","Type":"ContainerDied","Data":"44436811702db93fbef110badd0e712d1ea394a13540c601d64207fba7090522"}
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.945059 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt"
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.990907 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-bundle\") pod \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") "
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.991214 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-util\") pod \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") "
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.991286 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fn9qc\" (UniqueName: \"kubernetes.io/projected/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-kube-api-access-fn9qc\") pod \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\" (UID: \"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e\") "
Feb 16 00:22:51 crc kubenswrapper[5114]: I0216 00:22:51.993868 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-bundle" (OuterVolumeSpecName: "bundle") pod "d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" (UID: "d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.001041 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-kube-api-access-fn9qc" (OuterVolumeSpecName: "kube-api-access-fn9qc") pod "d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" (UID: "d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e"). InnerVolumeSpecName "kube-api-access-fn9qc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.005684 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-util" (OuterVolumeSpecName: "util") pod "d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" (UID: "d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.093587 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.093635 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-util\") on node \"crc\" DevicePath \"\""
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.093652 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fn9qc\" (UniqueName: \"kubernetes.io/projected/d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e-kube-api-access-fn9qc\") on node \"crc\" DevicePath \"\""
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.626306 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjdm8" event={"ID":"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9","Type":"ContainerStarted","Data":"9d25c612b7356dec495edd03f3f3639e8bda004d332b700d7d8c4669c8e8aa94"}
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.631008 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt"
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.631120 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/59d91eeadfbc177692af3c8c1571c9d473bd01e833d0373cf802b3d5729c5jt" event={"ID":"d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e","Type":"ContainerDied","Data":"1ad155161c7cd545514b7ff122f0b360076265c95b4ee125480f922a7bfa7823"}
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.631170 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ad155161c7cd545514b7ff122f0b360076265c95b4ee125480f922a7bfa7823"
Feb 16 00:22:52 crc kubenswrapper[5114]: I0216 00:22:52.931964 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.007714 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-util\") pod \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") "
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.008007 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25mdw\" (UniqueName: \"kubernetes.io/projected/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-kube-api-access-25mdw\") pod \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") "
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.008056 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-bundle\") pod \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\" (UID: \"6fa0aab5-e3d3-4cb3-8409-296dcc548f30\") "
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.008754 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-bundle" (OuterVolumeSpecName: "bundle") pod "6fa0aab5-e3d3-4cb3-8409-296dcc548f30" (UID: "6fa0aab5-e3d3-4cb3-8409-296dcc548f30"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.017685 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-kube-api-access-25mdw" (OuterVolumeSpecName: "kube-api-access-25mdw") pod "6fa0aab5-e3d3-4cb3-8409-296dcc548f30" (UID: "6fa0aab5-e3d3-4cb3-8409-296dcc548f30"). InnerVolumeSpecName "kube-api-access-25mdw". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.029651 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-util" (OuterVolumeSpecName: "util") pod "6fa0aab5-e3d3-4cb3-8409-296dcc548f30" (UID: "6fa0aab5-e3d3-4cb3-8409-296dcc548f30"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.110444 5114 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-bundle\") on node \"crc\" DevicePath \"\""
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.110579 5114 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-util\") on node \"crc\" DevicePath \"\""
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.110618 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25mdw\" (UniqueName: \"kubernetes.io/projected/6fa0aab5-e3d3-4cb3-8409-296dcc548f30-kube-api-access-25mdw\") on node \"crc\" DevicePath \"\""
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.645598 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj"
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.645610 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj" event={"ID":"6fa0aab5-e3d3-4cb3-8409-296dcc548f30","Type":"ContainerDied","Data":"7c8ba0a252d8e269cee55cb8474db8d48c5cda01943411c9e491401464ea030b"}
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.645784 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c8ba0a252d8e269cee55cb8474db8d48c5cda01943411c9e491401464ea030b"
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.648999 5114 generic.go:358] "Generic (PLEG): container finished" podID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerID="9d25c612b7356dec495edd03f3f3639e8bda004d332b700d7d8c4669c8e8aa94" exitCode=0
Feb 16 00:22:53 crc kubenswrapper[5114]: I0216 00:22:53.649152 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjdm8" event={"ID":"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9","Type":"ContainerDied","Data":"9d25c612b7356dec495edd03f3f3639e8bda004d332b700d7d8c4669c8e8aa94"}
Feb 16 00:22:54 crc kubenswrapper[5114]: I0216 00:22:54.662735 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjdm8" event={"ID":"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9","Type":"ContainerStarted","Data":"2cd6e128cb99980a3c1c205ffa5547e182a804d3a391a89e79848e603f227c2a"}
Feb 16 00:22:54 crc kubenswrapper[5114]: I0216 00:22:54.687386 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zjdm8" podStartSLOduration=5.058699236 podStartE2EDuration="5.687363774s" podCreationTimestamp="2026-02-16 00:22:49 +0000 UTC" firstStartedPulling="2026-02-16 00:22:51.615707249 +0000 UTC m=+847.996984067" lastFinishedPulling="2026-02-16 00:22:52.244371777 +0000 UTC m=+848.625648605" observedRunningTime="2026-02-16 00:22:54.685824061 +0000 UTC m=+851.067100889" watchObservedRunningTime="2026-02-16 00:22:54.687363774 +0000 UTC m=+851.068640592"
Feb 16 00:22:59 crc kubenswrapper[5114]: I0216 00:22:59.984036 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:22:59 crc kubenswrapper[5114]: I0216 00:22:59.985082 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.392058 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"]
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393135 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerName="util"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393161 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerName="util"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393190 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerName="pull"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393197 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerName="pull"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393206 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerName="extract"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393213 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerName="extract"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393226 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerName="util"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393232 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerName="util"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393272 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerName="extract"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393281 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerName="extract"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393294 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerName="pull"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393299 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerName="pull"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393417 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5ec4fa2-571c-4a06-b7b7-26ceb4d84f3e" containerName="extract"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.393431 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="6fa0aab5-e3d3-4cb3-8409-296dcc548f30" containerName="extract"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.399100 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.401582 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-dz8bp\""
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.407443 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"]
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.545923 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc62l\" (UniqueName: \"kubernetes.io/projected/3a0bdffd-0870-40b1-a79d-90994889cdcb-kube-api-access-wc62l\") pod \"service-telemetry-operator-794b5697c7-zk2fg\" (UID: \"3a0bdffd-0870-40b1-a79d-90994889cdcb\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.546001 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3a0bdffd-0870-40b1-a79d-90994889cdcb-runner\") pod \"service-telemetry-operator-794b5697c7-zk2fg\" (UID: \"3a0bdffd-0870-40b1-a79d-90994889cdcb\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.647501 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3a0bdffd-0870-40b1-a79d-90994889cdcb-runner\") pod \"service-telemetry-operator-794b5697c7-zk2fg\" (UID: \"3a0bdffd-0870-40b1-a79d-90994889cdcb\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.647814 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wc62l\" (UniqueName: \"kubernetes.io/projected/3a0bdffd-0870-40b1-a79d-90994889cdcb-kube-api-access-wc62l\") pod \"service-telemetry-operator-794b5697c7-zk2fg\" (UID: \"3a0bdffd-0870-40b1-a79d-90994889cdcb\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.648419 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/3a0bdffd-0870-40b1-a79d-90994889cdcb-runner\") pod \"service-telemetry-operator-794b5697c7-zk2fg\" (UID: \"3a0bdffd-0870-40b1-a79d-90994889cdcb\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.672528 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc62l\" (UniqueName: \"kubernetes.io/projected/3a0bdffd-0870-40b1-a79d-90994889cdcb-kube-api-access-wc62l\") pod \"service-telemetry-operator-794b5697c7-zk2fg\" (UID: \"3a0bdffd-0870-40b1-a79d-90994889cdcb\") " pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.718613 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"
Feb 16 00:23:00 crc kubenswrapper[5114]: I0216 00:23:00.980853 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-794b5697c7-zk2fg"]
Feb 16 00:23:01 crc kubenswrapper[5114]: I0216 00:23:01.046935 5114 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zjdm8" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="registry-server" probeResult="failure" output=<
Feb 16 00:23:01 crc kubenswrapper[5114]: timeout: failed to connect service ":50051" within 1s
Feb 16 00:23:01 crc kubenswrapper[5114]: >
Feb 16 00:23:01 crc kubenswrapper[5114]: I0216 00:23:01.735741 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg" event={"ID":"3a0bdffd-0870-40b1-a79d-90994889cdcb","Type":"ContainerStarted","Data":"9d31116e856b85985309ac6a2fdbd73ffa99ae9b16bd65c5cd24f668d5fb1c87"}
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.590965 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-l28x6"]
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.599158 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6"
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.602940 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-sbgxb\""
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.608588 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-l28x6"]
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.688795 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx4sf\" (UniqueName: \"kubernetes.io/projected/20b29fa7-eab6-4c07-918e-1d3ee9767202-kube-api-access-sx4sf\") pod \"interconnect-operator-78b9bd8798-l28x6\" (UID: \"20b29fa7-eab6-4c07-918e-1d3ee9767202\") " pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6"
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.790968 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sx4sf\" (UniqueName: \"kubernetes.io/projected/20b29fa7-eab6-4c07-918e-1d3ee9767202-kube-api-access-sx4sf\") pod \"interconnect-operator-78b9bd8798-l28x6\" (UID: \"20b29fa7-eab6-4c07-918e-1d3ee9767202\") " pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6"
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.815487 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx4sf\" (UniqueName: \"kubernetes.io/projected/20b29fa7-eab6-4c07-918e-1d3ee9767202-kube-api-access-sx4sf\") pod \"interconnect-operator-78b9bd8798-l28x6\" (UID: \"20b29fa7-eab6-4c07-918e-1d3ee9767202\") " pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6"
Feb 16 00:23:02 crc kubenswrapper[5114]: I0216 00:23:02.918911 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6"
Feb 16 00:23:03 crc kubenswrapper[5114]: I0216 00:23:03.177185 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-l28x6"]
Feb 16 00:23:03 crc kubenswrapper[5114]: I0216 00:23:03.761233 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6" event={"ID":"20b29fa7-eab6-4c07-918e-1d3ee9767202","Type":"ContainerStarted","Data":"019e6d7841bba4db47d4a6e34a7f91fccf8926681331502bcd10a13ce5367aad"}
Feb 16 00:23:08 crc kubenswrapper[5114]: I0216 00:23:08.806900 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg" event={"ID":"3a0bdffd-0870-40b1-a79d-90994889cdcb","Type":"ContainerStarted","Data":"a9f12978f2e9e64501febdec07694fac6cd000368a641fcd5a6d673d1733f501"}
Feb 16 00:23:08 crc kubenswrapper[5114]: I0216 00:23:08.828987 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-794b5697c7-zk2fg" podStartSLOduration=2.062913705 podStartE2EDuration="8.828958186s" podCreationTimestamp="2026-02-16 00:23:00 +0000 UTC" firstStartedPulling="2026-02-16 00:23:00.985671785 +0000 UTC m=+857.366948603" lastFinishedPulling="2026-02-16 00:23:07.751716266 +0000 UTC m=+864.132993084" observedRunningTime="2026-02-16 00:23:08.826294151 +0000 UTC m=+865.207570989" watchObservedRunningTime="2026-02-16 00:23:08.828958186 +0000 UTC m=+865.210235004"
Feb 16 00:23:10 crc kubenswrapper[5114]: I0216 00:23:10.035884 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:23:10 crc kubenswrapper[5114]: I0216 00:23:10.101137 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zjdm8"
Feb 16 00:23:13 crc kubenswrapper[5114]: I0216 00:23:13.602128 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zjdm8"]
Feb 16 00:23:13 crc kubenswrapper[5114]: I0216 00:23:13.603468 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zjdm8" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="registry-server" containerID="cri-o://2cd6e128cb99980a3c1c205ffa5547e182a804d3a391a89e79848e603f227c2a" gracePeriod=2
Feb 16 00:23:13 crc kubenswrapper[5114]: I0216 00:23:13.863837 5114 generic.go:358] "Generic (PLEG): container finished" podID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerID="2cd6e128cb99980a3c1c205ffa5547e182a804d3a391a89e79848e603f227c2a" exitCode=0
Feb 16 00:23:13 crc kubenswrapper[5114]: I0216 00:23:13.863929 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjdm8" event={"ID":"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9","Type":"ContainerDied","Data":"2cd6e128cb99980a3c1c205ffa5547e182a804d3a391a89e79848e603f227c2a"}
Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.774014 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zjdm8" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.799775 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-catalog-content\") pod \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.799938 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p2cq\" (UniqueName: \"kubernetes.io/projected/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-kube-api-access-5p2cq\") pod \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.800228 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-utilities\") pod \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\" (UID: \"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9\") " Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.801219 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-utilities" (OuterVolumeSpecName: "utilities") pod "31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" (UID: "31ca9539-edb3-43ab-b062-e6d8f6e4d9d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.814531 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-kube-api-access-5p2cq" (OuterVolumeSpecName: "kube-api-access-5p2cq") pod "31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" (UID: "31ca9539-edb3-43ab-b062-e6d8f6e4d9d9"). InnerVolumeSpecName "kube-api-access-5p2cq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.874992 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zjdm8" event={"ID":"31ca9539-edb3-43ab-b062-e6d8f6e4d9d9","Type":"ContainerDied","Data":"8ab9de50958bc862fabca8b8af38f2b5c50a685d5824499b8118338032b79425"} Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.875056 5114 scope.go:117] "RemoveContainer" containerID="2cd6e128cb99980a3c1c205ffa5547e182a804d3a391a89e79848e603f227c2a" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.875239 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zjdm8" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.879906 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6" event={"ID":"20b29fa7-eab6-4c07-918e-1d3ee9767202","Type":"ContainerStarted","Data":"a7be53387225003c9a86ce1562edab13046634efff31a0adac9762cebe5d6abf"} Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.897632 5114 scope.go:117] "RemoveContainer" containerID="9d25c612b7356dec495edd03f3f3639e8bda004d332b700d7d8c4669c8e8aa94" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.901653 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.901756 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5p2cq\" (UniqueName: \"kubernetes.io/projected/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-kube-api-access-5p2cq\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.905464 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-l28x6" 
podStartSLOduration=1.546487547 podStartE2EDuration="12.905436666s" podCreationTimestamp="2026-02-16 00:23:02 +0000 UTC" firstStartedPulling="2026-02-16 00:23:03.191359354 +0000 UTC m=+859.572636172" lastFinishedPulling="2026-02-16 00:23:14.550308473 +0000 UTC m=+870.931585291" observedRunningTime="2026-02-16 00:23:14.900188168 +0000 UTC m=+871.281464986" watchObservedRunningTime="2026-02-16 00:23:14.905436666 +0000 UTC m=+871.286713484" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.909141 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" (UID: "31ca9539-edb3-43ab-b062-e6d8f6e4d9d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:23:14 crc kubenswrapper[5114]: I0216 00:23:14.934577 5114 scope.go:117] "RemoveContainer" containerID="44436811702db93fbef110badd0e712d1ea394a13540c601d64207fba7090522" Feb 16 00:23:15 crc kubenswrapper[5114]: I0216 00:23:15.003492 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:23:15 crc kubenswrapper[5114]: I0216 00:23:15.216748 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zjdm8"] Feb 16 00:23:15 crc kubenswrapper[5114]: I0216 00:23:15.223864 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zjdm8"] Feb 16 00:23:15 crc kubenswrapper[5114]: I0216 00:23:15.826838 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" path="/var/lib/kubelet/pods/31ca9539-edb3-43ab-b062-e6d8f6e4d9d9/volumes" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.705756 5114 
kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jvttc"] Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.707433 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="extract-content" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.707451 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="extract-content" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.707463 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="registry-server" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.707470 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="registry-server" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.707506 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="extract-utilities" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.707516 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="extract-utilities" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.707650 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="31ca9539-edb3-43ab-b062-e6d8f6e4d9d9" containerName="registry-server" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.717574 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.723494 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.723614 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-ng7gs\"" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.724485 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.724503 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.724627 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.724795 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.725110 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.729070 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jvttc"] Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.908130 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-config\") pod 
\"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.908225 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.908277 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.908342 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.908371 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: 
\"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.910580 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-users\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:30 crc kubenswrapper[5114]: I0216 00:23:30.910769 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjcgf\" (UniqueName: \"kubernetes.io/projected/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-kube-api-access-zjcgf\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.012443 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.012507 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.012536 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.012554 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.012575 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-users\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.012611 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zjcgf\" (UniqueName: \"kubernetes.io/projected/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-kube-api-access-zjcgf\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.012683 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-config\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") 
" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.013637 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-config\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.021322 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.021353 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.022713 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.023402 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.029078 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-users\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.035711 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjcgf\" (UniqueName: \"kubernetes.io/projected/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-kube-api-access-zjcgf\") pod \"default-interconnect-55bf8d5cb-jvttc\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.053048 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:23:31 crc kubenswrapper[5114]: I0216 00:23:31.542152 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jvttc"] Feb 16 00:23:32 crc kubenswrapper[5114]: I0216 00:23:32.048956 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" event={"ID":"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18","Type":"ContainerStarted","Data":"ddd667a9c07583f15421768e40b2c0300b35b8077e77354462072ca03b295c76"} Feb 16 00:23:37 crc kubenswrapper[5114]: I0216 00:23:37.092361 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" event={"ID":"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18","Type":"ContainerStarted","Data":"01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110"} Feb 16 00:23:37 crc kubenswrapper[5114]: I0216 00:23:37.140023 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" podStartSLOduration=2.313999099 podStartE2EDuration="7.139982024s" podCreationTimestamp="2026-02-16 00:23:30 +0000 UTC" firstStartedPulling="2026-02-16 00:23:31.552585865 +0000 UTC m=+887.933862693" lastFinishedPulling="2026-02-16 00:23:36.3785688 +0000 UTC m=+892.759845618" observedRunningTime="2026-02-16 00:23:37.137170265 +0000 UTC m=+893.518447173" watchObservedRunningTime="2026-02-16 00:23:37.139982024 +0000 UTC m=+893.521258882" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.946655 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.953663 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.964150 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.964517 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.964860 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.965542 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.967565 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.967597 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.967893 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.968129 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-92s4n\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.968309 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.968554 5114 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"prometheus-default\"" Feb 16 00:23:41 crc kubenswrapper[5114]: I0216 00:23:41.976907 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085673 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-config-out\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085737 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085764 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085792 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-web-config\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085816 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-config\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085845 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085881 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085909 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-tls-assets\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085924 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 
00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085942 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbxw\" (UniqueName: \"kubernetes.io/projected/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-kube-api-access-txbxw\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085967 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.085997 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.188303 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.188401 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls\") pod 
\"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.188455 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-config-out\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.188483 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.188703 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0" Feb 16 00:23:42 crc kubenswrapper[5114]: E0216 00:23:42.188650 5114 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Feb 16 00:23:42 crc kubenswrapper[5114]: E0216 00:23:42.188852 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls podName:1ff8c5ee-b5d9-4135-a6bc-793a420274d5 nodeName:}" failed. No retries permitted until 2026-02-16 00:23:42.688821119 +0000 UTC m=+899.070097927 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "1ff8c5ee-b5d9-4135-a6bc-793a420274d5") : secret "default-prometheus-proxy-tls" not found
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189349 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-web-config\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189407 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-config\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189465 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189535 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189588 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-tls-assets\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189614 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189646 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-txbxw\" (UniqueName: \"kubernetes.io/projected/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-kube-api-access-txbxw\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.189467 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.190020 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.191039 5114 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.196315 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.197608 5114 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.197646 5114 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/edd92566a5695470cee87ca8d9e92137400db7d1a8e070f2c9c36e2bb89cd0cb/globalmount\"" pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.197727 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-config-out\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.201179 5114 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-web-config\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.210586 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.211884 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-tls-assets\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.213263 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-config\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.214398 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbxw\" (UniqueName: \"kubernetes.io/projected/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-kube-api-access-txbxw\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.233507 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-40eaeace-09fe-4e8c-867e-77d7678fbc95\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: I0216 00:23:42.699422 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:42 crc kubenswrapper[5114]: E0216 00:23:42.699646 5114 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found
Feb 16 00:23:42 crc kubenswrapper[5114]: E0216 00:23:42.700218 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls podName:1ff8c5ee-b5d9-4135-a6bc-793a420274d5 nodeName:}" failed. No retries permitted until 2026-02-16 00:23:43.700185997 +0000 UTC m=+900.081462815 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "1ff8c5ee-b5d9-4135-a6bc-793a420274d5") : secret "default-prometheus-proxy-tls" not found
Feb 16 00:23:43 crc kubenswrapper[5114]: I0216 00:23:43.724009 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:43 crc kubenswrapper[5114]: I0216 00:23:43.737460 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/1ff8c5ee-b5d9-4135-a6bc-793a420274d5-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"1ff8c5ee-b5d9-4135-a6bc-793a420274d5\") " pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:43 crc kubenswrapper[5114]: I0216 00:23:43.780829 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0"
Feb 16 00:23:44 crc kubenswrapper[5114]: I0216 00:23:44.324102 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"]
Feb 16 00:23:45 crc kubenswrapper[5114]: I0216 00:23:45.153532 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1ff8c5ee-b5d9-4135-a6bc-793a420274d5","Type":"ContainerStarted","Data":"8c49a60839d2e8c6d840ad13efd9f5d06233085259077f3e44337287005d837f"}
Feb 16 00:23:47 crc kubenswrapper[5114]: I0216 00:23:47.065992 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jlj6_c4627438-b1a6-4cc9-85f6-10e9dd97943b/kube-multus/0.log"
Feb 16 00:23:47 crc kubenswrapper[5114]: I0216 00:23:47.066849 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jlj6_c4627438-b1a6-4cc9-85f6-10e9dd97943b/kube-multus/0.log"
Feb 16 00:23:47 crc kubenswrapper[5114]: I0216 00:23:47.080080 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 16 00:23:47 crc kubenswrapper[5114]: I0216 00:23:47.081155 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.205932 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1ff8c5ee-b5d9-4135-a6bc-793a420274d5","Type":"ContainerStarted","Data":"294280f7f42b4e357601ed58bf08c90f9bfef1e66ad6b5a0294c4f349eb86a67"}
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.350426 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-778s9"]
Feb 16 00:23:50 crc 
kubenswrapper[5114]: I0216 00:23:50.404733 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-778s9"]
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.405368 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.536073 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-catalog-content\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.536667 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdzf\" (UniqueName: \"kubernetes.io/projected/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-kube-api-access-czdzf\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.536748 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-utilities\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.638026 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-czdzf\" (UniqueName: \"kubernetes.io/projected/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-kube-api-access-czdzf\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " 
pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.638148 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-utilities\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.638236 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-catalog-content\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.638979 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-catalog-content\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.639512 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-utilities\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.670326 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-czdzf\" (UniqueName: \"kubernetes.io/projected/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-kube-api-access-czdzf\") pod \"community-operators-778s9\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " 
pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:50 crc kubenswrapper[5114]: I0216 00:23:50.738864 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-778s9"
Feb 16 00:23:51 crc kubenswrapper[5114]: I0216 00:23:51.269214 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-778s9"]
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.222203 5114 generic.go:358] "Generic (PLEG): container finished" podID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerID="af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63" exitCode=0
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.222291 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-778s9" event={"ID":"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf","Type":"ContainerDied","Data":"af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63"}
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.222935 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-778s9" event={"ID":"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf","Type":"ContainerStarted","Data":"0ccbb363bdbc13086266680ff0bf742e9e7f6f2c406e4dcc29b53bfbf0a0296d"}
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.532068 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"]
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.552633 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"]
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.552892 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.671203 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59n4v\" (UniqueName: \"kubernetes.io/projected/e8a7463b-414b-493f-bee0-aee38e377445-kube-api-access-59n4v\") pod \"default-snmp-webhook-6774d8dfbc-7gxth\" (UID: \"e8a7463b-414b-493f-bee0-aee38e377445\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.773403 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-59n4v\" (UniqueName: \"kubernetes.io/projected/e8a7463b-414b-493f-bee0-aee38e377445-kube-api-access-59n4v\") pod \"default-snmp-webhook-6774d8dfbc-7gxth\" (UID: \"e8a7463b-414b-493f-bee0-aee38e377445\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.799616 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-59n4v\" (UniqueName: \"kubernetes.io/projected/e8a7463b-414b-493f-bee0-aee38e377445-kube-api-access-59n4v\") pod \"default-snmp-webhook-6774d8dfbc-7gxth\" (UID: \"e8a7463b-414b-493f-bee0-aee38e377445\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"
Feb 16 00:23:52 crc kubenswrapper[5114]: I0216 00:23:52.887023 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"
Feb 16 00:23:53 crc kubenswrapper[5114]: I0216 00:23:53.146585 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth"]
Feb 16 00:23:53 crc kubenswrapper[5114]: I0216 00:23:53.234095 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth" event={"ID":"e8a7463b-414b-493f-bee0-aee38e377445","Type":"ContainerStarted","Data":"cf66eed8bb991344ca816b0d297ceed4aefd1d0a658965f7c2d7731368ef3083"}
Feb 16 00:23:54 crc kubenswrapper[5114]: I0216 00:23:54.245755 5114 generic.go:358] "Generic (PLEG): container finished" podID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerID="e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee" exitCode=0
Feb 16 00:23:54 crc kubenswrapper[5114]: I0216 00:23:54.245864 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-778s9" event={"ID":"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf","Type":"ContainerDied","Data":"e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee"}
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.268090 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-778s9" event={"ID":"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf","Type":"ContainerStarted","Data":"e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6"}
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.297514 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-778s9" podStartSLOduration=4.259322039 podStartE2EDuration="5.297486998s" podCreationTimestamp="2026-02-16 00:23:50 +0000 UTC" firstStartedPulling="2026-02-16 00:23:52.223401406 +0000 UTC m=+908.604678224" lastFinishedPulling="2026-02-16 00:23:53.261566365 +0000 UTC m=+909.642843183" 
observedRunningTime="2026-02-16 00:23:55.289213745 +0000 UTC m=+911.670490583" watchObservedRunningTime="2026-02-16 00:23:55.297486998 +0000 UTC m=+911.678763816"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.710080 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.717155 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.719817 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\""
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.721203 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\""
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.721705 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\""
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.722011 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-vbs6j\""
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.722205 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\""
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.729145 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\""
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.740855 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"]
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826320 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826375 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826411 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826446 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1a34684-d024-4cc2-a7fc-ffcdf071e216-tls-assets\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826499 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1a34684-d024-4cc2-a7fc-ffcdf071e216-config-out\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 
crc kubenswrapper[5114]: I0216 00:23:55.826527 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-config-volume\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826610 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w894b\" (UniqueName: \"kubernetes.io/projected/d1a34684-d024-4cc2-a7fc-ffcdf071e216-kube-api-access-w894b\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826652 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.826686 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-web-config\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.928815 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w894b\" (UniqueName: \"kubernetes.io/projected/d1a34684-d024-4cc2-a7fc-ffcdf071e216-kube-api-access-w894b\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " 
pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.928895 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.928943 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-web-config\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.928983 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.929003 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.929040 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: 
\"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.929099 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1a34684-d024-4cc2-a7fc-ffcdf071e216-tls-assets\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.929146 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1a34684-d024-4cc2-a7fc-ffcdf071e216-config-out\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.929169 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-config-volume\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: E0216 00:23:55.932404 5114 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found
Feb 16 00:23:55 crc kubenswrapper[5114]: E0216 00:23:55.932480 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls podName:d1a34684-d024-4cc2-a7fc-ffcdf071e216 nodeName:}" failed. No retries permitted until 2026-02-16 00:23:56.432455999 +0000 UTC m=+912.813732817 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "d1a34684-d024-4cc2-a7fc-ffcdf071e216") : secret "default-alertmanager-proxy-tls" not found
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.939162 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-web-config\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.944348 5114 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.944422 5114 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0ff7596582c103cf9abf87f567d845402228bd50f0091858b7903fc159813fd8/globalmount\"" pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.945221 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/d1a34684-d024-4cc2-a7fc-ffcdf071e216-tls-assets\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0"
Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.945484 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.945777 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/d1a34684-d024-4cc2-a7fc-ffcdf071e216-config-out\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.952608 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w894b\" (UniqueName: \"kubernetes.io/projected/d1a34684-d024-4cc2-a7fc-ffcdf071e216-kube-api-access-w894b\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.954143 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.954650 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-config-volume\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:55 crc kubenswrapper[5114]: I0216 00:23:55.973605 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7500ac7f-77f9-40dd-8129-bc9619baa44f\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:56 crc kubenswrapper[5114]: I0216 00:23:56.444467 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:56 crc kubenswrapper[5114]: E0216 00:23:56.444708 5114 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 16 00:23:56 crc kubenswrapper[5114]: E0216 00:23:56.444798 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls podName:d1a34684-d024-4cc2-a7fc-ffcdf071e216 nodeName:}" failed. No retries permitted until 2026-02-16 00:23:57.444775273 +0000 UTC m=+913.826052081 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "d1a34684-d024-4cc2-a7fc-ffcdf071e216") : secret "default-alertmanager-proxy-tls" not found Feb 16 00:23:57 crc kubenswrapper[5114]: I0216 00:23:57.299980 5114 generic.go:358] "Generic (PLEG): container finished" podID="1ff8c5ee-b5d9-4135-a6bc-793a420274d5" containerID="294280f7f42b4e357601ed58bf08c90f9bfef1e66ad6b5a0294c4f349eb86a67" exitCode=0 Feb 16 00:23:57 crc kubenswrapper[5114]: I0216 00:23:57.300153 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1ff8c5ee-b5d9-4135-a6bc-793a420274d5","Type":"ContainerDied","Data":"294280f7f42b4e357601ed58bf08c90f9bfef1e66ad6b5a0294c4f349eb86a67"} Feb 16 00:23:57 crc kubenswrapper[5114]: I0216 00:23:57.466752 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:57 crc kubenswrapper[5114]: E0216 00:23:57.467440 5114 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 16 00:23:57 crc kubenswrapper[5114]: E0216 00:23:57.468041 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls podName:d1a34684-d024-4cc2-a7fc-ffcdf071e216 nodeName:}" failed. No retries permitted until 2026-02-16 00:23:59.468014589 +0000 UTC m=+915.849291427 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "d1a34684-d024-4cc2-a7fc-ffcdf071e216") : secret "default-alertmanager-proxy-tls" not found Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.497842 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:23:59 crc kubenswrapper[5114]: E0216 00:23:59.498174 5114 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Feb 16 00:23:59 crc kubenswrapper[5114]: E0216 00:23:59.498402 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls podName:d1a34684-d024-4cc2-a7fc-ffcdf071e216 nodeName:}" failed. No retries permitted until 2026-02-16 00:24:03.498365841 +0000 UTC m=+919.879642699 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "d1a34684-d024-4cc2-a7fc-ffcdf071e216") : secret "default-alertmanager-proxy-tls" not found Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.611999 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v42sm"] Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.618864 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.621477 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v42sm"] Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.700751 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-utilities\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.700837 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whdvg\" (UniqueName: \"kubernetes.io/projected/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-kube-api-access-whdvg\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.700889 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-catalog-content\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.802503 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-utilities\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.802600 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-whdvg\" (UniqueName: \"kubernetes.io/projected/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-kube-api-access-whdvg\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.802645 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-catalog-content\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.803119 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-utilities\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.803296 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-catalog-content\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.843874 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whdvg\" (UniqueName: \"kubernetes.io/projected/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-kube-api-access-whdvg\") pod \"certified-operators-v42sm\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:23:59 crc kubenswrapper[5114]: I0216 00:23:59.941238 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.134869 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29520024-qzwr4"] Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.163157 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520024-qzwr4"] Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.163438 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520024-qzwr4" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.166347 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\"" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.167323 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.172516 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.212208 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k62xt\" (UniqueName: \"kubernetes.io/projected/2ded7593-ae73-4c96-ad73-bfe65049750b-kube-api-access-k62xt\") pod \"auto-csr-approver-29520024-qzwr4\" (UID: \"2ded7593-ae73-4c96-ad73-bfe65049750b\") " pod="openshift-infra/auto-csr-approver-29520024-qzwr4" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.313563 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k62xt\" (UniqueName: \"kubernetes.io/projected/2ded7593-ae73-4c96-ad73-bfe65049750b-kube-api-access-k62xt\") pod \"auto-csr-approver-29520024-qzwr4\" (UID: \"2ded7593-ae73-4c96-ad73-bfe65049750b\") " 
pod="openshift-infra/auto-csr-approver-29520024-qzwr4" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.353736 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k62xt\" (UniqueName: \"kubernetes.io/projected/2ded7593-ae73-4c96-ad73-bfe65049750b-kube-api-access-k62xt\") pod \"auto-csr-approver-29520024-qzwr4\" (UID: \"2ded7593-ae73-4c96-ad73-bfe65049750b\") " pod="openshift-infra/auto-csr-approver-29520024-qzwr4" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.481328 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520024-qzwr4" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.740387 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-778s9" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.740452 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-778s9" Feb 16 00:24:00 crc kubenswrapper[5114]: I0216 00:24:00.815841 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-778s9" Feb 16 00:24:01 crc kubenswrapper[5114]: I0216 00:24:01.411326 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-778s9" Feb 16 00:24:01 crc kubenswrapper[5114]: I0216 00:24:01.975562 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-778s9"] Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.053342 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520024-qzwr4"] Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.112145 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v42sm"] Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 
00:24:03.375639 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-778s9" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="registry-server" containerID="cri-o://e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6" gracePeriod=2 Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.401810 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.565595 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.576576 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d1a34684-d024-4cc2-a7fc-ffcdf071e216-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"d1a34684-d024-4cc2-a7fc-ffcdf071e216\") " pod="service-telemetry/alertmanager-default-0" Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.583140 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.823697 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-778s9" Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.872016 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-utilities\") pod \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.872110 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-catalog-content\") pod \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.872212 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czdzf\" (UniqueName: \"kubernetes.io/projected/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-kube-api-access-czdzf\") pod \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\" (UID: \"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf\") " Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.873221 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-utilities" (OuterVolumeSpecName: "utilities") pod "a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" (UID: "a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.879273 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-kube-api-access-czdzf" (OuterVolumeSpecName: "kube-api-access-czdzf") pod "a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" (UID: "a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf"). InnerVolumeSpecName "kube-api-access-czdzf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.973308 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-czdzf\" (UniqueName: \"kubernetes.io/projected/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-kube-api-access-czdzf\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:03 crc kubenswrapper[5114]: I0216 00:24:03.973336 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.048698 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Feb 16 00:24:04 crc kubenswrapper[5114]: W0216 00:24:04.056826 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1a34684_d024_4cc2_a7fc_ffcdf071e216.slice/crio-3e87e4db3f27f1e27b8f231f4cd63d1222e016416e1aa31683d21db8f28eff24 WatchSource:0}: Error finding container 3e87e4db3f27f1e27b8f231f4cd63d1222e016416e1aa31683d21db8f28eff24: Status 404 returned error can't find the container with id 3e87e4db3f27f1e27b8f231f4cd63d1222e016416e1aa31683d21db8f28eff24 Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.387012 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d1a34684-d024-4cc2-a7fc-ffcdf071e216","Type":"ContainerStarted","Data":"3e87e4db3f27f1e27b8f231f4cd63d1222e016416e1aa31683d21db8f28eff24"} Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.391685 5114 generic.go:358] "Generic (PLEG): container finished" podID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerID="e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6" exitCode=0 Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.391824 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-778s9" event={"ID":"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf","Type":"ContainerDied","Data":"e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6"} Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.391844 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-778s9" event={"ID":"a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf","Type":"ContainerDied","Data":"0ccbb363bdbc13086266680ff0bf742e9e7f6f2c406e4dcc29b53bfbf0a0296d"} Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.391861 5114 scope.go:117] "RemoveContainer" containerID="e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.391987 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-778s9" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.401921 5114 generic.go:358] "Generic (PLEG): container finished" podID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerID="aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890" exitCode=0 Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.402031 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v42sm" event={"ID":"66574a52-e8c2-4cc1-89d1-0aa7744df3ba","Type":"ContainerDied","Data":"aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890"} Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.402055 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v42sm" event={"ID":"66574a52-e8c2-4cc1-89d1-0aa7744df3ba","Type":"ContainerStarted","Data":"7c72cb3e54ccc14d786f65d5115a42f7602d5fdc5b5d593246f6c5314d613bc8"} Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.406379 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth" 
event={"ID":"e8a7463b-414b-493f-bee0-aee38e377445","Type":"ContainerStarted","Data":"608d73d1c59c2d10e276a4332da26dfedf02a3ebed8db653b15daa15b4858886"} Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.412578 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520024-qzwr4" event={"ID":"2ded7593-ae73-4c96-ad73-bfe65049750b","Type":"ContainerStarted","Data":"434e63c6a1ca6fa7f08d93eff44a81eb07e3229a3607f6c904d5a952e3b3ddf4"} Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.445378 5114 scope.go:117] "RemoveContainer" containerID="e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.453841 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" (UID: "a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.479697 5114 scope.go:117] "RemoveContainer" containerID="af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.480620 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.509198 5114 scope.go:117] "RemoveContainer" containerID="e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6" Feb 16 00:24:04 crc kubenswrapper[5114]: E0216 00:24:04.509709 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6\": container with ID starting with e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6 not found: ID does not exist" containerID="e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.509746 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6"} err="failed to get container status \"e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6\": rpc error: code = NotFound desc = could not find container \"e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6\": container with ID starting with e61b938990d5c9ee69b10058a032e8804e4f34ae08e72146174b1f398295abe6 not found: ID does not exist" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.509787 5114 scope.go:117] "RemoveContainer" containerID="e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee" Feb 16 00:24:04 crc kubenswrapper[5114]: E0216 00:24:04.509994 5114 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee\": container with ID starting with e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee not found: ID does not exist" containerID="e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.510017 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee"} err="failed to get container status \"e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee\": rpc error: code = NotFound desc = could not find container \"e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee\": container with ID starting with e80aa43453fe2efae056ec30f7217f25e3155925babec494fee5ab2ea5f76bee not found: ID does not exist" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.510031 5114 scope.go:117] "RemoveContainer" containerID="af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63" Feb 16 00:24:04 crc kubenswrapper[5114]: E0216 00:24:04.510499 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63\": container with ID starting with af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63 not found: ID does not exist" containerID="af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.510562 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63"} err="failed to get container status \"af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63\": rpc error: code = NotFound desc = could 
not find container \"af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63\": container with ID starting with af5fb9c96f4af13e20bc3513a8d661fc87b6e273a59ddc9faf494b0e6f575e63 not found: ID does not exist" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.722386 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-7gxth" podStartSLOduration=2.495991337 podStartE2EDuration="12.722363037s" podCreationTimestamp="2026-02-16 00:23:52 +0000 UTC" firstStartedPulling="2026-02-16 00:23:53.169222387 +0000 UTC m=+909.550499205" lastFinishedPulling="2026-02-16 00:24:03.395594087 +0000 UTC m=+919.776870905" observedRunningTime="2026-02-16 00:24:04.435638116 +0000 UTC m=+920.816914934" watchObservedRunningTime="2026-02-16 00:24:04.722363037 +0000 UTC m=+921.103639865" Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.727861 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-778s9"] Feb 16 00:24:04 crc kubenswrapper[5114]: I0216 00:24:04.732957 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-778s9"] Feb 16 00:24:05 crc kubenswrapper[5114]: I0216 00:24:05.827647 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" path="/var/lib/kubelet/pods/a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf/volumes" Feb 16 00:24:06 crc kubenswrapper[5114]: I0216 00:24:06.431365 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d1a34684-d024-4cc2-a7fc-ffcdf071e216","Type":"ContainerStarted","Data":"b65b038b952ac1f08b543feb190b77bdab41244e2093201e8bb881b5394f4491"} Feb 16 00:24:10 crc kubenswrapper[5114]: I0216 00:24:10.463414 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v42sm" 
event={"ID":"66574a52-e8c2-4cc1-89d1-0aa7744df3ba","Type":"ContainerStarted","Data":"3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e"} Feb 16 00:24:11 crc kubenswrapper[5114]: I0216 00:24:11.472178 5114 generic.go:358] "Generic (PLEG): container finished" podID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerID="3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e" exitCode=0 Feb 16 00:24:11 crc kubenswrapper[5114]: I0216 00:24:11.473589 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v42sm" event={"ID":"66574a52-e8c2-4cc1-89d1-0aa7744df3ba","Type":"ContainerDied","Data":"3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e"} Feb 16 00:24:11 crc kubenswrapper[5114]: I0216 00:24:11.479169 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1ff8c5ee-b5d9-4135-a6bc-793a420274d5","Type":"ContainerStarted","Data":"49f4d5b47b6d4ee3e69261d9d6667c246170f21ab8546b3eb6ab9f4f4011c663"} Feb 16 00:24:11 crc kubenswrapper[5114]: I0216 00:24:11.481081 5114 generic.go:358] "Generic (PLEG): container finished" podID="2ded7593-ae73-4c96-ad73-bfe65049750b" containerID="0c392a92da7f8d0ea384a50a29794605f90d387390a5533f0b687b0b30e19671" exitCode=0 Feb 16 00:24:11 crc kubenswrapper[5114]: I0216 00:24:11.481226 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520024-qzwr4" event={"ID":"2ded7593-ae73-4c96-ad73-bfe65049750b","Type":"ContainerDied","Data":"0c392a92da7f8d0ea384a50a29794605f90d387390a5533f0b687b0b30e19671"} Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.080032 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2"] Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.085001 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="extract-utilities" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.085059 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="extract-utilities" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.085106 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="registry-server" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.085117 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="registry-server" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.085134 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="extract-content" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.085148 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="extract-content" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.085367 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="a021ec7f-e9ec-45ad-84a4-3ac5aa905fbf" containerName="registry-server" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.094386 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2"] Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.094560 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.097594 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-wc4tr\"" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.098571 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.098750 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.098872 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.199099 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.199160 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zcsc\" (UniqueName: \"kubernetes.io/projected/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-kube-api-access-8zcsc\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.199196 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.199222 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.199313 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.300542 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.300612 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.300671 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8zcsc\" (UniqueName: \"kubernetes.io/projected/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-kube-api-access-8zcsc\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.300707 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.300735 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: E0216 00:24:12.300765 5114 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 16 00:24:12 crc kubenswrapper[5114]: E0216 00:24:12.300886 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls podName:c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0 nodeName:}" failed. No retries permitted until 2026-02-16 00:24:12.800861798 +0000 UTC m=+929.182138616 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" (UID: "c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.301893 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.302061 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.340456 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: 
I0216 00:24:12.357224 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zcsc\" (UniqueName: \"kubernetes.io/projected/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-kube-api-access-8zcsc\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.495222 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v42sm" event={"ID":"66574a52-e8c2-4cc1-89d1-0aa7744df3ba","Type":"ContainerStarted","Data":"e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294"} Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.517866 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v42sm" podStartSLOduration=11.188591853 podStartE2EDuration="13.517840888s" podCreationTimestamp="2026-02-16 00:23:59 +0000 UTC" firstStartedPulling="2026-02-16 00:24:04.402784842 +0000 UTC m=+920.784061660" lastFinishedPulling="2026-02-16 00:24:06.732033847 +0000 UTC m=+923.113310695" observedRunningTime="2026-02-16 00:24:12.516888891 +0000 UTC m=+928.898165709" watchObservedRunningTime="2026-02-16 00:24:12.517840888 +0000 UTC m=+928.899117716" Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.807903 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:12 crc kubenswrapper[5114]: E0216 00:24:12.808083 5114 secret.go:189] Couldn't get secret 
service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Feb 16 00:24:12 crc kubenswrapper[5114]: E0216 00:24:12.808479 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls podName:c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0 nodeName:}" failed. No retries permitted until 2026-02-16 00:24:13.808457038 +0000 UTC m=+930.189733866 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" (UID: "c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0") : secret "default-cloud1-coll-meter-proxy-tls" not found Feb 16 00:24:12 crc kubenswrapper[5114]: I0216 00:24:12.904755 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520024-qzwr4" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.010773 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k62xt\" (UniqueName: \"kubernetes.io/projected/2ded7593-ae73-4c96-ad73-bfe65049750b-kube-api-access-k62xt\") pod \"2ded7593-ae73-4c96-ad73-bfe65049750b\" (UID: \"2ded7593-ae73-4c96-ad73-bfe65049750b\") " Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.022382 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ded7593-ae73-4c96-ad73-bfe65049750b-kube-api-access-k62xt" (OuterVolumeSpecName: "kube-api-access-k62xt") pod "2ded7593-ae73-4c96-ad73-bfe65049750b" (UID: "2ded7593-ae73-4c96-ad73-bfe65049750b"). InnerVolumeSpecName "kube-api-access-k62xt". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.112110 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k62xt\" (UniqueName: \"kubernetes.io/projected/2ded7593-ae73-4c96-ad73-bfe65049750b-kube-api-access-k62xt\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.506781 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1ff8c5ee-b5d9-4135-a6bc-793a420274d5","Type":"ContainerStarted","Data":"bc873c77190c4689bb1af991a9b96d5ab29a2e83ae1e1e86e9d00ec6a9740f87"} Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.509700 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520024-qzwr4" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.509716 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520024-qzwr4" event={"ID":"2ded7593-ae73-4c96-ad73-bfe65049750b","Type":"ContainerDied","Data":"434e63c6a1ca6fa7f08d93eff44a81eb07e3229a3607f6c904d5a952e3b3ddf4"} Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.509891 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="434e63c6a1ca6fa7f08d93eff44a81eb07e3229a3607f6c904d5a952e3b3ddf4" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.512726 5114 generic.go:358] "Generic (PLEG): container finished" podID="d1a34684-d024-4cc2-a7fc-ffcdf071e216" containerID="b65b038b952ac1f08b543feb190b77bdab41244e2093201e8bb881b5394f4491" exitCode=0 Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.512834 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d1a34684-d024-4cc2-a7fc-ffcdf071e216","Type":"ContainerDied","Data":"b65b038b952ac1f08b543feb190b77bdab41244e2093201e8bb881b5394f4491"} Feb 16 00:24:13 crc 
kubenswrapper[5114]: I0216 00:24:13.824893 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.829663 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-lnlc2\" (UID: \"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.939017 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.973401 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29520018-5lsvl"] Feb 16 00:24:13 crc kubenswrapper[5114]: I0216 00:24:13.981220 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29520018-5lsvl"] Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.201752 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2"] Feb 16 00:24:14 crc kubenswrapper[5114]: W0216 00:24:14.666482 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc27b36c4_17e9_40f3_aaa0_e9ecd028b0e0.slice/crio-1acad930413d22b54751362256f615e52e7a906338dc12a1696554c103a82592 WatchSource:0}: Error finding container 1acad930413d22b54751362256f615e52e7a906338dc12a1696554c103a82592: Status 404 returned error can't find the container with id 1acad930413d22b54751362256f615e52e7a906338dc12a1696554c103a82592 Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.803606 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb"] Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.804453 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2ded7593-ae73-4c96-ad73-bfe65049750b" containerName="oc" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.804474 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ded7593-ae73-4c96-ad73-bfe65049750b" containerName="oc" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.804618 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="2ded7593-ae73-4c96-ad73-bfe65049750b" containerName="oc" Feb 16 00:24:14 crc 
kubenswrapper[5114]: I0216 00:24:14.808756 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.810924 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.811440 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.814278 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb"] Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.947208 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.948623 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.948675 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pptw\" (UniqueName: 
\"kubernetes.io/projected/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-kube-api-access-9pptw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.948811 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:14 crc kubenswrapper[5114]: I0216 00:24:14.948863 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.050223 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.050290 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9pptw\" (UniqueName: \"kubernetes.io/projected/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-kube-api-access-9pptw\") pod 
\"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.050325 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.050359 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.050460 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: E0216 00:24:15.051841 5114 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 16 00:24:15 crc kubenswrapper[5114]: E0216 00:24:15.051955 5114 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls podName:5c9897b9-4b63-4f01-ad1e-acbd2aae855c nodeName:}" failed. No retries permitted until 2026-02-16 00:24:15.551930301 +0000 UTC m=+931.933207119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" (UID: "5c9897b9-4b63-4f01-ad1e-acbd2aae855c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.052015 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.052409 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.070919 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pptw\" (UniqueName: \"kubernetes.io/projected/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-kube-api-access-9pptw\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc 
kubenswrapper[5114]: I0216 00:24:15.071702 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.528952 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerStarted","Data":"1acad930413d22b54751362256f615e52e7a906338dc12a1696554c103a82592"} Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.559508 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:15 crc kubenswrapper[5114]: E0216 00:24:15.561260 5114 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 16 00:24:15 crc kubenswrapper[5114]: E0216 00:24:15.561476 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls podName:5c9897b9-4b63-4f01-ad1e-acbd2aae855c nodeName:}" failed. No retries permitted until 2026-02-16 00:24:16.561442345 +0000 UTC m=+932.942719163 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" (UID: "5c9897b9-4b63-4f01-ad1e-acbd2aae855c") : secret "default-cloud1-ceil-meter-proxy-tls" not found Feb 16 00:24:15 crc kubenswrapper[5114]: I0216 00:24:15.865559 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c67c5be5-e4b3-47d6-a4c7-95cba7f5830b" path="/var/lib/kubelet/pods/c67c5be5-e4b3-47d6-a4c7-95cba7f5830b/volumes" Feb 16 00:24:16 crc kubenswrapper[5114]: I0216 00:24:16.574415 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:16 crc kubenswrapper[5114]: I0216 00:24:16.581115 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5c9897b9-4b63-4f01-ad1e-acbd2aae855c-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb\" (UID: \"5c9897b9-4b63-4f01-ad1e-acbd2aae855c\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:16 crc kubenswrapper[5114]: I0216 00:24:16.634348 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.505487 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5"] Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.553092 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5"] Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.553290 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.555954 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.556257 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.708524 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.709157 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: 
\"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.709190 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7ddf84e-a562-4084-8975-cf18dd6558f7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.709329 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f7ddf84e-a562-4084-8975-cf18dd6558f7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.709582 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5czht\" (UniqueName: \"kubernetes.io/projected/f7ddf84e-a562-4084-8975-cf18dd6558f7-kube-api-access-5czht\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.811720 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.811797 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7ddf84e-a562-4084-8975-cf18dd6558f7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.811828 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f7ddf84e-a562-4084-8975-cf18dd6558f7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: E0216 00:24:18.812382 5114 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.812416 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5czht\" (UniqueName: \"kubernetes.io/projected/f7ddf84e-a562-4084-8975-cf18dd6558f7-kube-api-access-5czht\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: E0216 00:24:18.812531 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls podName:f7ddf84e-a562-4084-8975-cf18dd6558f7 nodeName:}" failed. 
No retries permitted until 2026-02-16 00:24:19.312489355 +0000 UTC m=+935.693766213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" (UID: "f7ddf84e-a562-4084-8975-cf18dd6558f7") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.812691 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7ddf84e-a562-4084-8975-cf18dd6558f7-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.812795 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.813221 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f7ddf84e-a562-4084-8975-cf18dd6558f7-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.829120 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5czht\" (UniqueName: 
\"kubernetes.io/projected/f7ddf84e-a562-4084-8975-cf18dd6558f7-kube-api-access-5czht\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:18 crc kubenswrapper[5114]: I0216 00:24:18.833810 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:19 crc kubenswrapper[5114]: I0216 00:24:19.320221 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:19 crc kubenswrapper[5114]: E0216 00:24:19.320434 5114 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:24:19 crc kubenswrapper[5114]: E0216 00:24:19.320495 5114 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls podName:f7ddf84e-a562-4084-8975-cf18dd6558f7 nodeName:}" failed. No retries permitted until 2026-02-16 00:24:20.320479236 +0000 UTC m=+936.701756054 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" (UID: "f7ddf84e-a562-4084-8975-cf18dd6558f7") : secret "default-cloud1-sens-meter-proxy-tls" not found Feb 16 00:24:19 crc kubenswrapper[5114]: I0216 00:24:19.941517 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:24:19 crc kubenswrapper[5114]: I0216 00:24:19.942736 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.000990 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.336138 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.357043 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f7ddf84e-a562-4084-8975-cf18dd6558f7-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5\" (UID: \"f7ddf84e-a562-4084-8975-cf18dd6558f7\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.379420 5114 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.634090 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.683929 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v42sm"] Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.891488 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb"] Feb 16 00:24:20 crc kubenswrapper[5114]: I0216 00:24:20.954037 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5"] Feb 16 00:24:21 crc kubenswrapper[5114]: I0216 00:24:21.580477 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerStarted","Data":"a98ae0d829333d5eccd780e5d051446fa6563fbe90da98015643477ead25ffa3"} Feb 16 00:24:21 crc kubenswrapper[5114]: I0216 00:24:21.585312 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerStarted","Data":"8ccbf2343bd6ff7dad7003f0df01c76286f625b00e5e50464511fbd94c9591d4"} Feb 16 00:24:21 crc kubenswrapper[5114]: I0216 00:24:21.588300 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"1ff8c5ee-b5d9-4135-a6bc-793a420274d5","Type":"ContainerStarted","Data":"a9e08bf2b8d5860f38da91b3e6a7ae7bec79405877f1041e2baff57c0446c2df"} Feb 16 00:24:21 crc kubenswrapper[5114]: I0216 00:24:21.596459 5114 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d1a34684-d024-4cc2-a7fc-ffcdf071e216","Type":"ContainerStarted","Data":"f4761805a044152eb07ba872a8539043816d2bd7e9063fdf0a604d8ad59c8671"} Feb 16 00:24:21 crc kubenswrapper[5114]: I0216 00:24:21.598133 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerStarted","Data":"79e778b63420919ab6a1b5992cc9c7c836c9fed942545ee73327abf0599a1465"} Feb 16 00:24:21 crc kubenswrapper[5114]: I0216 00:24:21.617964 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=5.342583491 podStartE2EDuration="41.617936086s" podCreationTimestamp="2026-02-16 00:23:40 +0000 UTC" firstStartedPulling="2026-02-16 00:23:44.336451986 +0000 UTC m=+900.717728834" lastFinishedPulling="2026-02-16 00:24:20.611804611 +0000 UTC m=+936.993081429" observedRunningTime="2026-02-16 00:24:21.612795592 +0000 UTC m=+937.994072430" watchObservedRunningTime="2026-02-16 00:24:21.617936086 +0000 UTC m=+937.999212904" Feb 16 00:24:22 crc kubenswrapper[5114]: I0216 00:24:22.619121 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerStarted","Data":"1bea300d00934d9ca90ead77af2001483619446a85677f12aa27f5c7b59bef47"} Feb 16 00:24:22 crc kubenswrapper[5114]: I0216 00:24:22.622364 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerStarted","Data":"8bb07f1ce2b87cc2ea46c5eea4bcdb5d8931742a93fff6c87d94b96431c1f188"} Feb 16 00:24:22 crc kubenswrapper[5114]: I0216 00:24:22.622682 5114 kuberuntime_container.go:858] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-v42sm" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="registry-server" containerID="cri-o://e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294" gracePeriod=2 Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.193994 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.287831 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-catalog-content\") pod \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.287917 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-utilities\") pod \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.287970 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whdvg\" (UniqueName: \"kubernetes.io/projected/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-kube-api-access-whdvg\") pod \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\" (UID: \"66574a52-e8c2-4cc1-89d1-0aa7744df3ba\") " Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.290932 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-utilities" (OuterVolumeSpecName: "utilities") pod "66574a52-e8c2-4cc1-89d1-0aa7744df3ba" (UID: "66574a52-e8c2-4cc1-89d1-0aa7744df3ba"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.297017 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-kube-api-access-whdvg" (OuterVolumeSpecName: "kube-api-access-whdvg") pod "66574a52-e8c2-4cc1-89d1-0aa7744df3ba" (UID: "66574a52-e8c2-4cc1-89d1-0aa7744df3ba"). InnerVolumeSpecName "kube-api-access-whdvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.326993 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66574a52-e8c2-4cc1-89d1-0aa7744df3ba" (UID: "66574a52-e8c2-4cc1-89d1-0aa7744df3ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.390239 5114 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.390313 5114 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.390326 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-whdvg\" (UniqueName: \"kubernetes.io/projected/66574a52-e8c2-4cc1-89d1-0aa7744df3ba-kube-api-access-whdvg\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.634983 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" 
event={"ID":"d1a34684-d024-4cc2-a7fc-ffcdf071e216","Type":"ContainerStarted","Data":"cc0c44eee58d8fe1ad4a73a617003ed5ebc9687453d1dd959a7f971e0fed499f"} Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.640373 5114 generic.go:358] "Generic (PLEG): container finished" podID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerID="e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294" exitCode=0 Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.640482 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v42sm" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.640472 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v42sm" event={"ID":"66574a52-e8c2-4cc1-89d1-0aa7744df3ba","Type":"ContainerDied","Data":"e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294"} Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.641332 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v42sm" event={"ID":"66574a52-e8c2-4cc1-89d1-0aa7744df3ba","Type":"ContainerDied","Data":"7c72cb3e54ccc14d786f65d5115a42f7602d5fdc5b5d593246f6c5314d613bc8"} Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.641360 5114 scope.go:117] "RemoveContainer" containerID="e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.675780 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v42sm"] Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.680504 5114 scope.go:117] "RemoveContainer" containerID="3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.682758 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v42sm"] Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 
00:24:23.699940 5114 scope.go:117] "RemoveContainer" containerID="aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.732853 5114 scope.go:117] "RemoveContainer" containerID="e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294" Feb 16 00:24:23 crc kubenswrapper[5114]: E0216 00:24:23.734146 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294\": container with ID starting with e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294 not found: ID does not exist" containerID="e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.734320 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294"} err="failed to get container status \"e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294\": rpc error: code = NotFound desc = could not find container \"e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294\": container with ID starting with e4f4c7150aa3a6e2c0cbd9918aa0e3c693315c676d3d84c558d45e261cf84294 not found: ID does not exist" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.734351 5114 scope.go:117] "RemoveContainer" containerID="3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e" Feb 16 00:24:23 crc kubenswrapper[5114]: E0216 00:24:23.734745 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e\": container with ID starting with 3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e not found: ID does not exist" 
containerID="3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.734770 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e"} err="failed to get container status \"3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e\": rpc error: code = NotFound desc = could not find container \"3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e\": container with ID starting with 3804a5c88e81deed266c92454c8c47e22032198cc6d3b15045cb6295d900ff4e not found: ID does not exist" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.734786 5114 scope.go:117] "RemoveContainer" containerID="aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890" Feb 16 00:24:23 crc kubenswrapper[5114]: E0216 00:24:23.735108 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890\": container with ID starting with aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890 not found: ID does not exist" containerID="aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.735130 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890"} err="failed to get container status \"aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890\": rpc error: code = NotFound desc = could not find container \"aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890\": container with ID starting with aea6da99d92af21272fb9941cf7b14fd336a18a1e15d29698b4ac3eaabe00890 not found: ID does not exist" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.781576 5114 kubelet.go:2658] 
"SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Feb 16 00:24:23 crc kubenswrapper[5114]: I0216 00:24:23.827982 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" path="/var/lib/kubelet/pods/66574a52-e8c2-4cc1-89d1-0aa7744df3ba/volumes" Feb 16 00:24:24 crc kubenswrapper[5114]: I0216 00:24:24.651476 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"d1a34684-d024-4cc2-a7fc-ffcdf071e216","Type":"ContainerStarted","Data":"878a63c91407081f53d0ae450b7511b782a024b4bbaf5793ff150ce4d3e19c39"} Feb 16 00:24:24 crc kubenswrapper[5114]: I0216 00:24:24.695870 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=20.430681576 podStartE2EDuration="30.695828429s" podCreationTimestamp="2026-02-16 00:23:54 +0000 UTC" firstStartedPulling="2026-02-16 00:24:13.514275912 +0000 UTC m=+929.895552720" lastFinishedPulling="2026-02-16 00:24:23.779422755 +0000 UTC m=+940.160699573" observedRunningTime="2026-02-16 00:24:24.682094973 +0000 UTC m=+941.063371791" watchObservedRunningTime="2026-02-16 00:24:24.695828429 +0000 UTC m=+941.077105247" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.500076 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp"] Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.501270 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="extract-utilities" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.501372 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="extract-utilities" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.501483 5114 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="extract-content" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.501552 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="extract-content" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.501661 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="registry-server" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.501744 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="registry-server" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.502139 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="66574a52-e8c2-4cc1-89d1-0aa7744df3ba" containerName="registry-server" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.535173 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.536578 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp"] Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.539851 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.539851 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.623855 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.623923 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.623973 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8dlx\" (UniqueName: \"kubernetes.io/projected/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-kube-api-access-b8dlx\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: 
\"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.624019 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.725364 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.725425 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.725475 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b8dlx\" (UniqueName: \"kubernetes.io/projected/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-kube-api-access-b8dlx\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 
00:24:25.725533 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.726544 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.726930 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.733358 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.744648 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8dlx\" (UniqueName: \"kubernetes.io/projected/7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f-kube-api-access-b8dlx\") pod 
\"default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp\" (UID: \"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:25 crc kubenswrapper[5114]: I0216 00:24:25.858878 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" Feb 16 00:24:26 crc kubenswrapper[5114]: I0216 00:24:26.908235 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx"] Feb 16 00:24:26 crc kubenswrapper[5114]: I0216 00:24:26.942930 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx"] Feb 16 00:24:26 crc kubenswrapper[5114]: I0216 00:24:26.943081 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:26 crc kubenswrapper[5114]: I0216 00:24:26.968023 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.053089 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/54228c4d-e0b6-4b56-84fc-f61ea9be6043-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.053138 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/54228c4d-e0b6-4b56-84fc-f61ea9be6043-sg-core-config\") pod 
\"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.053192 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mr24\" (UniqueName: \"kubernetes.io/projected/54228c4d-e0b6-4b56-84fc-f61ea9be6043-kube-api-access-9mr24\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.053240 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/54228c4d-e0b6-4b56-84fc-f61ea9be6043-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.154422 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9mr24\" (UniqueName: \"kubernetes.io/projected/54228c4d-e0b6-4b56-84fc-f61ea9be6043-kube-api-access-9mr24\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.154503 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/54228c4d-e0b6-4b56-84fc-f61ea9be6043-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.154565 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/54228c4d-e0b6-4b56-84fc-f61ea9be6043-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.154585 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/54228c4d-e0b6-4b56-84fc-f61ea9be6043-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.155516 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/54228c4d-e0b6-4b56-84fc-f61ea9be6043-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.155658 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/54228c4d-e0b6-4b56-84fc-f61ea9be6043-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.161329 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" 
(UniqueName: \"kubernetes.io/secret/54228c4d-e0b6-4b56-84fc-f61ea9be6043-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.172006 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mr24\" (UniqueName: \"kubernetes.io/projected/54228c4d-e0b6-4b56-84fc-f61ea9be6043-kube-api-access-9mr24\") pod \"default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx\" (UID: \"54228c4d-e0b6-4b56-84fc-f61ea9be6043\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:27 crc kubenswrapper[5114]: I0216 00:24:27.283079 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" Feb 16 00:24:28 crc kubenswrapper[5114]: I0216 00:24:28.781083 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Feb 16 00:24:28 crc kubenswrapper[5114]: I0216 00:24:28.842379 5114 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Feb 16 00:24:29 crc kubenswrapper[5114]: I0216 00:24:29.763056 5114 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Feb 16 00:24:33 crc kubenswrapper[5114]: I0216 00:24:33.739794 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx"] Feb 16 00:24:33 crc kubenswrapper[5114]: W0216 00:24:33.751725 5114 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54228c4d_e0b6_4b56_84fc_f61ea9be6043.slice/crio-a30133c21df0ffa170cebe219e9cc6c8982275683e957eae221f129d83cfc48b WatchSource:0}: Error finding container a30133c21df0ffa170cebe219e9cc6c8982275683e957eae221f129d83cfc48b: Status 404 returned error can't find the container with id a30133c21df0ffa170cebe219e9cc6c8982275683e957eae221f129d83cfc48b Feb 16 00:24:33 crc kubenswrapper[5114]: I0216 00:24:33.802254 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp"] Feb 16 00:24:33 crc kubenswrapper[5114]: W0216 00:24:33.811636 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f6e7f61_2dbb_456d_aba2_6d912bbe0b4f.slice/crio-5cd7155d66fa9c6f1593ee7900e83c513200381e0ffa06ab42242ec06290baa9 WatchSource:0}: Error finding container 5cd7155d66fa9c6f1593ee7900e83c513200381e0ffa06ab42242ec06290baa9: Status 404 returned error can't find the container with id 5cd7155d66fa9c6f1593ee7900e83c513200381e0ffa06ab42242ec06290baa9 Feb 16 00:24:34 crc kubenswrapper[5114]: I0216 00:24:34.747721 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerStarted","Data":"16b7dc5ddcdf09a19f5c6afbfcbeb4f007967c5fb097cb762de01a0c31e8ee43"} Feb 16 00:24:34 crc kubenswrapper[5114]: I0216 00:24:34.752039 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerStarted","Data":"f29cb3e473e5645294abf00b3e9862371aaac156582b42d0a50fce52cf226a6b"} Feb 16 00:24:34 crc kubenswrapper[5114]: I0216 00:24:34.753706 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" event={"ID":"54228c4d-e0b6-4b56-84fc-f61ea9be6043","Type":"ContainerStarted","Data":"319b0a88312cb425a4be6972003c5a08e39f803ac14fd8ed99ae64d5730d0e92"} Feb 16 00:24:34 crc kubenswrapper[5114]: I0216 00:24:34.753771 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" event={"ID":"54228c4d-e0b6-4b56-84fc-f61ea9be6043","Type":"ContainerStarted","Data":"a30133c21df0ffa170cebe219e9cc6c8982275683e957eae221f129d83cfc48b"} Feb 16 00:24:34 crc kubenswrapper[5114]: I0216 00:24:34.755968 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerStarted","Data":"25ba1dc7db742080a2b71708e818aad52a8db484b298cee3f0660afcb4fe6ece"} Feb 16 00:24:34 crc kubenswrapper[5114]: I0216 00:24:34.757372 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" event={"ID":"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f","Type":"ContainerStarted","Data":"1fcb0f310017f0317fa6f16d3368c2355053922744b2aa1dc50c0d7eebaaab32"} Feb 16 00:24:34 crc kubenswrapper[5114]: I0216 00:24:34.757402 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" event={"ID":"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f","Type":"ContainerStarted","Data":"5cd7155d66fa9c6f1593ee7900e83c513200381e0ffa06ab42242ec06290baa9"} Feb 16 00:24:38 crc kubenswrapper[5114]: I0216 00:24:38.820215 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jvttc"] Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.799331 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerStarted","Data":"ba0499fd68c3f226871233a6b9657a00f00be54cfb6c1ebb33c4212afb08b45f"} Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.804716 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" event={"ID":"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f","Type":"ContainerStarted","Data":"1635d359b1ab3e5bb11f090bbc6057d8d746d0fd920559a35a0dd406c776ed06"} Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.811006 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerStarted","Data":"383c6f25d3d3f465c3465d47aa34e0e748d28a9a3e458546374d9ed5a9b41fa2"} Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.813324 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerStarted","Data":"2860736329e124f38bf27aac7989339e82af18e14ebbad0e5d02523d3c9bcad6"} Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.816630 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" podUID="3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" containerName="default-interconnect" containerID="cri-o://01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110" gracePeriod=30 Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.825925 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" 
event={"ID":"54228c4d-e0b6-4b56-84fc-f61ea9be6043","Type":"ContainerStarted","Data":"54f1a2c08d1f5cee5331cdb9a339a9c97336ec2e68fd9a624bade24f2b1bf2fa"} Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.837262 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" podStartSLOduration=3.526973169 podStartE2EDuration="21.83719587s" podCreationTimestamp="2026-02-16 00:24:18 +0000 UTC" firstStartedPulling="2026-02-16 00:24:20.976299448 +0000 UTC m=+937.357576256" lastFinishedPulling="2026-02-16 00:24:39.286522139 +0000 UTC m=+955.667798957" observedRunningTime="2026-02-16 00:24:39.834047652 +0000 UTC m=+956.215324500" watchObservedRunningTime="2026-02-16 00:24:39.83719587 +0000 UTC m=+956.218472728" Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.878628 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" podStartSLOduration=8.403380884 podStartE2EDuration="13.878592754s" podCreationTimestamp="2026-02-16 00:24:26 +0000 UTC" firstStartedPulling="2026-02-16 00:24:33.75335006 +0000 UTC m=+950.134626878" lastFinishedPulling="2026-02-16 00:24:39.22856193 +0000 UTC m=+955.609838748" observedRunningTime="2026-02-16 00:24:39.862123871 +0000 UTC m=+956.243400739" watchObservedRunningTime="2026-02-16 00:24:39.878592754 +0000 UTC m=+956.259869732" Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.902214 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" podStartSLOduration=9.593098648 podStartE2EDuration="14.902174587s" podCreationTimestamp="2026-02-16 00:24:25 +0000 UTC" firstStartedPulling="2026-02-16 00:24:33.813385268 +0000 UTC m=+950.194662086" lastFinishedPulling="2026-02-16 00:24:39.122461207 +0000 UTC m=+955.503738025" observedRunningTime="2026-02-16 
00:24:39.895915811 +0000 UTC m=+956.277192639" watchObservedRunningTime="2026-02-16 00:24:39.902174587 +0000 UTC m=+956.283451415" Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.918931 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" podStartSLOduration=7.673320863 podStartE2EDuration="25.918903977s" podCreationTimestamp="2026-02-16 00:24:14 +0000 UTC" firstStartedPulling="2026-02-16 00:24:20.911942849 +0000 UTC m=+937.293219667" lastFinishedPulling="2026-02-16 00:24:39.157525963 +0000 UTC m=+955.538802781" observedRunningTime="2026-02-16 00:24:39.91721633 +0000 UTC m=+956.298493148" watchObservedRunningTime="2026-02-16 00:24:39.918903977 +0000 UTC m=+956.300180785" Feb 16 00:24:39 crc kubenswrapper[5114]: I0216 00:24:39.960575 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" podStartSLOduration=3.503477025 podStartE2EDuration="27.960541898s" podCreationTimestamp="2026-02-16 00:24:12 +0000 UTC" firstStartedPulling="2026-02-16 00:24:14.668951294 +0000 UTC m=+931.050228112" lastFinishedPulling="2026-02-16 00:24:39.126016167 +0000 UTC m=+955.507292985" observedRunningTime="2026-02-16 00:24:39.953891481 +0000 UTC m=+956.335168309" watchObservedRunningTime="2026-02-16 00:24:39.960541898 +0000 UTC m=+956.341818716" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.242974 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.286739 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-fbspj"] Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.287498 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" containerName="default-interconnect" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.287520 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" containerName="default-interconnect" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.287655 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" containerName="default-interconnect" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.291134 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-ca\") pod \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.291207 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-ca\") pod \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.291287 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-config\") pod \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\" (UID: 
\"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.291322 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-users\") pod \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.291371 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-credentials\") pod \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.291609 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-credentials\") pod \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.291705 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjcgf\" (UniqueName: \"kubernetes.io/projected/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-kube-api-access-zjcgf\") pod \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\" (UID: \"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18\") " Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.293461 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" (UID: "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.299501 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" (UID: "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.300238 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-fbspj"] Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.306550 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" (UID: "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.312844 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" (UID: "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18"). InnerVolumeSpecName "default-interconnect-openstack-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.315960 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-kube-api-access-zjcgf" (OuterVolumeSpecName: "kube-api-access-zjcgf") pod "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" (UID: "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18"). InnerVolumeSpecName "kube-api-access-zjcgf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.319396 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" (UID: "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.300382 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.321440 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" (UID: "3cf2055f-3f66-4ba8-b0a7-5ffb27982c18"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.393601 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394127 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/57b6502c-0320-4c84-984c-ed19935fbe7c-sasl-config\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394159 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394201 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-sasl-users\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394236 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394280 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4nt\" (UniqueName: \"kubernetes.io/projected/57b6502c-0320-4c84-984c-ed19935fbe7c-kube-api-access-sf4nt\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394342 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394474 5114 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394503 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjcgf\" (UniqueName: \"kubernetes.io/projected/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-kube-api-access-zjcgf\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394517 5114 
reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394529 5114 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394544 5114 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394555 5114 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-sasl-users\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.394568 5114 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.496482 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.496716 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"sasl-config\" (UniqueName: \"kubernetes.io/configmap/57b6502c-0320-4c84-984c-ed19935fbe7c-sasl-config\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.496986 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.497056 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-sasl-users\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.497114 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.497141 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4nt\" (UniqueName: \"kubernetes.io/projected/57b6502c-0320-4c84-984c-ed19935fbe7c-kube-api-access-sf4nt\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 
16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.497266 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.498787 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/57b6502c-0320-4c84-984c-ed19935fbe7c-sasl-config\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.509357 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.509366 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.510118 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: 
\"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.514983 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4nt\" (UniqueName: \"kubernetes.io/projected/57b6502c-0320-4c84-984c-ed19935fbe7c-kube-api-access-sf4nt\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.516027 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.522849 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/57b6502c-0320-4c84-984c-ed19935fbe7c-sasl-users\") pod \"default-interconnect-55bf8d5cb-fbspj\" (UID: \"57b6502c-0320-4c84-984c-ed19935fbe7c\") " pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.679983 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.828160 5114 generic.go:358] "Generic (PLEG): container finished" podID="f7ddf84e-a562-4084-8975-cf18dd6558f7" containerID="25ba1dc7db742080a2b71708e818aad52a8db484b298cee3f0660afcb4fe6ece" exitCode=0 Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.828238 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerDied","Data":"25ba1dc7db742080a2b71708e818aad52a8db484b298cee3f0660afcb4fe6ece"} Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.828728 5114 scope.go:117] "RemoveContainer" containerID="25ba1dc7db742080a2b71708e818aad52a8db484b298cee3f0660afcb4fe6ece" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.844604 5114 generic.go:358] "Generic (PLEG): container finished" podID="7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f" containerID="1fcb0f310017f0317fa6f16d3368c2355053922744b2aa1dc50c0d7eebaaab32" exitCode=0 Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.845010 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" event={"ID":"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f","Type":"ContainerDied","Data":"1fcb0f310017f0317fa6f16d3368c2355053922744b2aa1dc50c0d7eebaaab32"} Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.847151 5114 scope.go:117] "RemoveContainer" containerID="1fcb0f310017f0317fa6f16d3368c2355053922744b2aa1dc50c0d7eebaaab32" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.880931 5114 generic.go:358] "Generic (PLEG): container finished" podID="3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" containerID="01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110" exitCode=0 Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.881160 5114 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.881175 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" event={"ID":"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18","Type":"ContainerDied","Data":"01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110"} Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.881512 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-jvttc" event={"ID":"3cf2055f-3f66-4ba8-b0a7-5ffb27982c18","Type":"ContainerDied","Data":"ddd667a9c07583f15421768e40b2c0300b35b8077e77354462072ca03b295c76"} Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.881648 5114 scope.go:117] "RemoveContainer" containerID="01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.887967 5114 generic.go:358] "Generic (PLEG): container finished" podID="c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0" containerID="16b7dc5ddcdf09a19f5c6afbfcbeb4f007967c5fb097cb762de01a0c31e8ee43" exitCode=0 Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.888594 5114 scope.go:117] "RemoveContainer" containerID="16b7dc5ddcdf09a19f5c6afbfcbeb4f007967c5fb097cb762de01a0c31e8ee43" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.888767 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerDied","Data":"16b7dc5ddcdf09a19f5c6afbfcbeb4f007967c5fb097cb762de01a0c31e8ee43"} Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.896630 5114 generic.go:358] "Generic (PLEG): container finished" podID="5c9897b9-4b63-4f01-ad1e-acbd2aae855c" containerID="f29cb3e473e5645294abf00b3e9862371aaac156582b42d0a50fce52cf226a6b" exitCode=0 Feb 16 
00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.896686 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerDied","Data":"f29cb3e473e5645294abf00b3e9862371aaac156582b42d0a50fce52cf226a6b"} Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.897141 5114 scope.go:117] "RemoveContainer" containerID="f29cb3e473e5645294abf00b3e9862371aaac156582b42d0a50fce52cf226a6b" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.940720 5114 scope.go:117] "RemoveContainer" containerID="01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.940981 5114 generic.go:358] "Generic (PLEG): container finished" podID="54228c4d-e0b6-4b56-84fc-f61ea9be6043" containerID="319b0a88312cb425a4be6972003c5a08e39f803ac14fd8ed99ae64d5730d0e92" exitCode=0 Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.941114 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" event={"ID":"54228c4d-e0b6-4b56-84fc-f61ea9be6043","Type":"ContainerDied","Data":"319b0a88312cb425a4be6972003c5a08e39f803ac14fd8ed99ae64d5730d0e92"} Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 00:24:40.941788 5114 scope.go:117] "RemoveContainer" containerID="319b0a88312cb425a4be6972003c5a08e39f803ac14fd8ed99ae64d5730d0e92" Feb 16 00:24:40 crc kubenswrapper[5114]: E0216 00:24:40.950496 5114 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110\": container with ID starting with 01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110 not found: ID does not exist" containerID="01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110" Feb 16 00:24:40 crc kubenswrapper[5114]: I0216 
00:24:40.950592 5114 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110"} err="failed to get container status \"01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110\": rpc error: code = NotFound desc = could not find container \"01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110\": container with ID starting with 01473d8a999274930b9af96c0aaf8552141b54835665e2c95e14441b63522110 not found: ID does not exist" Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.004728 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jvttc"] Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.014643 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-jvttc"] Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.144414 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-fbspj"] Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.827222 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cf2055f-3f66-4ba8-b0a7-5ffb27982c18" path="/var/lib/kubelet/pods/3cf2055f-3f66-4ba8-b0a7-5ffb27982c18/volumes" Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.953466 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerStarted","Data":"cd08569caa312ebfcd15d9e2cc1e1e0e3b2a1e541a3c6f7627cd1d03012f20c3"} Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.958202 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" 
event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerStarted","Data":"fcaa02bf15a4135fdb647f32a1113b74665f3279b0d2794fe4dfed77cad0dc57"} Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.961934 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" event={"ID":"54228c4d-e0b6-4b56-84fc-f61ea9be6043","Type":"ContainerStarted","Data":"bafd79dabcfa7dbc2c85962fafaad0f794391ae78db3396d42bc3e10db745ae9"} Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.964380 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" event={"ID":"57b6502c-0320-4c84-984c-ed19935fbe7c","Type":"ContainerStarted","Data":"06618c6c77ae6b343e1e5c36ba7d02cc19523d677f314dd89ce472cb02b65883"} Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.964431 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" event={"ID":"57b6502c-0320-4c84-984c-ed19935fbe7c","Type":"ContainerStarted","Data":"89230efe0ec3dea42075ed877f051d7e68e30e57a6c90c3166174af287a3ae90"} Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.968322 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerStarted","Data":"d8d26e4e87c2f24aefd4ee7bbdbc888b918ff91bc0eaa740f39d7dce6f8e3888"} Feb 16 00:24:41 crc kubenswrapper[5114]: I0216 00:24:41.972220 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" event={"ID":"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f","Type":"ContainerStarted","Data":"69e0f3cae4f71159d69c9928f2458fb0f468e7cda8bfc53734c7e64029a55851"} Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.091810 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-interconnect-55bf8d5cb-fbspj" podStartSLOduration=4.091784956 podStartE2EDuration="4.091784956s" podCreationTimestamp="2026-02-16 00:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-16 00:24:42.084306276 +0000 UTC m=+958.465583094" watchObservedRunningTime="2026-02-16 00:24:42.091784956 +0000 UTC m=+958.473061774" Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.985088 5114 generic.go:358] "Generic (PLEG): container finished" podID="f7ddf84e-a562-4084-8975-cf18dd6558f7" containerID="d8d26e4e87c2f24aefd4ee7bbdbc888b918ff91bc0eaa740f39d7dce6f8e3888" exitCode=0 Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.985318 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerDied","Data":"d8d26e4e87c2f24aefd4ee7bbdbc888b918ff91bc0eaa740f39d7dce6f8e3888"} Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.985899 5114 scope.go:117] "RemoveContainer" containerID="d8d26e4e87c2f24aefd4ee7bbdbc888b918ff91bc0eaa740f39d7dce6f8e3888" Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.986346 5114 scope.go:117] "RemoveContainer" containerID="25ba1dc7db742080a2b71708e818aad52a8db484b298cee3f0660afcb4fe6ece" Feb 16 00:24:42 crc kubenswrapper[5114]: E0216 00:24:42.986661 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5_service-telemetry(f7ddf84e-a562-4084-8975-cf18dd6558f7)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" podUID="f7ddf84e-a562-4084-8975-cf18dd6558f7" Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.995043 5114 generic.go:358] "Generic (PLEG): 
container finished" podID="7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f" containerID="69e0f3cae4f71159d69c9928f2458fb0f468e7cda8bfc53734c7e64029a55851" exitCode=0 Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.996098 5114 scope.go:117] "RemoveContainer" containerID="69e0f3cae4f71159d69c9928f2458fb0f468e7cda8bfc53734c7e64029a55851" Feb 16 00:24:42 crc kubenswrapper[5114]: E0216 00:24:42.996467 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp_service-telemetry(7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" podUID="7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f" Feb 16 00:24:42 crc kubenswrapper[5114]: I0216 00:24:42.996822 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" event={"ID":"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f","Type":"ContainerDied","Data":"69e0f3cae4f71159d69c9928f2458fb0f468e7cda8bfc53734c7e64029a55851"} Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.022912 5114 generic.go:358] "Generic (PLEG): container finished" podID="c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0" containerID="cd08569caa312ebfcd15d9e2cc1e1e0e3b2a1e541a3c6f7627cd1d03012f20c3" exitCode=0 Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.023393 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerDied","Data":"cd08569caa312ebfcd15d9e2cc1e1e0e3b2a1e541a3c6f7627cd1d03012f20c3"} Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.023853 5114 scope.go:117] "RemoveContainer" containerID="cd08569caa312ebfcd15d9e2cc1e1e0e3b2a1e541a3c6f7627cd1d03012f20c3" Feb 16 00:24:43 crc kubenswrapper[5114]: E0216 
00:24:43.024371 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-lnlc2_service-telemetry(c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" podUID="c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0" Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.035518 5114 generic.go:358] "Generic (PLEG): container finished" podID="5c9897b9-4b63-4f01-ad1e-acbd2aae855c" containerID="fcaa02bf15a4135fdb647f32a1113b74665f3279b0d2794fe4dfed77cad0dc57" exitCode=0 Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.035645 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerDied","Data":"fcaa02bf15a4135fdb647f32a1113b74665f3279b0d2794fe4dfed77cad0dc57"} Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.036209 5114 scope.go:117] "RemoveContainer" containerID="fcaa02bf15a4135fdb647f32a1113b74665f3279b0d2794fe4dfed77cad0dc57" Feb 16 00:24:43 crc kubenswrapper[5114]: E0216 00:24:43.036472 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb_service-telemetry(5c9897b9-4b63-4f01-ad1e-acbd2aae855c)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" podUID="5c9897b9-4b63-4f01-ad1e-acbd2aae855c" Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.036611 5114 scope.go:117] "RemoveContainer" containerID="1fcb0f310017f0317fa6f16d3368c2355053922744b2aa1dc50c0d7eebaaab32" Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.078483 5114 generic.go:358] "Generic (PLEG): container 
finished" podID="54228c4d-e0b6-4b56-84fc-f61ea9be6043" containerID="bafd79dabcfa7dbc2c85962fafaad0f794391ae78db3396d42bc3e10db745ae9" exitCode=0 Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.079430 5114 scope.go:117] "RemoveContainer" containerID="bafd79dabcfa7dbc2c85962fafaad0f794391ae78db3396d42bc3e10db745ae9" Feb 16 00:24:43 crc kubenswrapper[5114]: E0216 00:24:43.079583 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx_service-telemetry(54228c4d-e0b6-4b56-84fc-f61ea9be6043)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" podUID="54228c4d-e0b6-4b56-84fc-f61ea9be6043" Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.079843 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" event={"ID":"54228c4d-e0b6-4b56-84fc-f61ea9be6043","Type":"ContainerDied","Data":"bafd79dabcfa7dbc2c85962fafaad0f794391ae78db3396d42bc3e10db745ae9"} Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.099733 5114 scope.go:117] "RemoveContainer" containerID="16b7dc5ddcdf09a19f5c6afbfcbeb4f007967c5fb097cb762de01a0c31e8ee43" Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.153767 5114 scope.go:117] "RemoveContainer" containerID="f29cb3e473e5645294abf00b3e9862371aaac156582b42d0a50fce52cf226a6b" Feb 16 00:24:43 crc kubenswrapper[5114]: I0216 00:24:43.192736 5114 scope.go:117] "RemoveContainer" containerID="319b0a88312cb425a4be6972003c5a08e39f803ac14fd8ed99ae64d5730d0e92" Feb 16 00:24:44 crc kubenswrapper[5114]: I0216 00:24:44.090219 5114 scope.go:117] "RemoveContainer" containerID="cd08569caa312ebfcd15d9e2cc1e1e0e3b2a1e541a3c6f7627cd1d03012f20c3" Feb 16 00:24:44 crc kubenswrapper[5114]: E0216 00:24:44.090731 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-lnlc2_service-telemetry(c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" podUID="c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0" Feb 16 00:24:44 crc kubenswrapper[5114]: I0216 00:24:44.091611 5114 scope.go:117] "RemoveContainer" containerID="fcaa02bf15a4135fdb647f32a1113b74665f3279b0d2794fe4dfed77cad0dc57" Feb 16 00:24:44 crc kubenswrapper[5114]: E0216 00:24:44.091799 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb_service-telemetry(5c9897b9-4b63-4f01-ad1e-acbd2aae855c)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" podUID="5c9897b9-4b63-4f01-ad1e-acbd2aae855c" Feb 16 00:24:44 crc kubenswrapper[5114]: I0216 00:24:44.093853 5114 scope.go:117] "RemoveContainer" containerID="bafd79dabcfa7dbc2c85962fafaad0f794391ae78db3396d42bc3e10db745ae9" Feb 16 00:24:44 crc kubenswrapper[5114]: E0216 00:24:44.094207 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx_service-telemetry(54228c4d-e0b6-4b56-84fc-f61ea9be6043)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" podUID="54228c4d-e0b6-4b56-84fc-f61ea9be6043" Feb 16 00:24:44 crc kubenswrapper[5114]: I0216 00:24:44.096377 5114 scope.go:117] "RemoveContainer" containerID="d8d26e4e87c2f24aefd4ee7bbdbc888b918ff91bc0eaa740f39d7dce6f8e3888" Feb 16 00:24:44 crc kubenswrapper[5114]: E0216 00:24:44.096587 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5_service-telemetry(f7ddf84e-a562-4084-8975-cf18dd6558f7)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" podUID="f7ddf84e-a562-4084-8975-cf18dd6558f7" Feb 16 00:24:44 crc kubenswrapper[5114]: I0216 00:24:44.100769 5114 scope.go:117] "RemoveContainer" containerID="69e0f3cae4f71159d69c9928f2458fb0f468e7cda8bfc53734c7e64029a55851" Feb 16 00:24:44 crc kubenswrapper[5114]: E0216 00:24:44.101085 5114 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp_service-telemetry(7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" podUID="7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f" Feb 16 00:24:49 crc kubenswrapper[5114]: I0216 00:24:49.310197 5114 scope.go:117] "RemoveContainer" containerID="8bd5f4ce0c03de6b040840a3a83bd1508fe7fce6170f120c9ff883f946c8e06b" Feb 16 00:24:50 crc kubenswrapper[5114]: I0216 00:24:50.085240 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:24:50 crc kubenswrapper[5114]: I0216 00:24:50.086085 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:24:54 crc kubenswrapper[5114]: I0216 
00:24:54.816351 5114 scope.go:117] "RemoveContainer" containerID="fcaa02bf15a4135fdb647f32a1113b74665f3279b0d2794fe4dfed77cad0dc57" Feb 16 00:24:56 crc kubenswrapper[5114]: I0216 00:24:56.817729 5114 scope.go:117] "RemoveContainer" containerID="cd08569caa312ebfcd15d9e2cc1e1e0e3b2a1e541a3c6f7627cd1d03012f20c3" Feb 16 00:24:57 crc kubenswrapper[5114]: I0216 00:24:57.214265 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb" event={"ID":"5c9897b9-4b63-4f01-ad1e-acbd2aae855c","Type":"ContainerStarted","Data":"bed5c450c882d32a9a63a1f4268f097eb15a96d80cb07ef361737c872dd3ef1f"} Feb 16 00:24:57 crc kubenswrapper[5114]: I0216 00:24:57.821035 5114 scope.go:117] "RemoveContainer" containerID="d8d26e4e87c2f24aefd4ee7bbdbc888b918ff91bc0eaa740f39d7dce6f8e3888" Feb 16 00:24:57 crc kubenswrapper[5114]: I0216 00:24:57.822840 5114 scope.go:117] "RemoveContainer" containerID="bafd79dabcfa7dbc2c85962fafaad0f794391ae78db3396d42bc3e10db745ae9" Feb 16 00:24:58 crc kubenswrapper[5114]: I0216 00:24:58.228360 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-lnlc2" event={"ID":"c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0","Type":"ContainerStarted","Data":"86d506d642816ed51d935dda55eb7333e9a681d046b7a404080c53022f84d3c9"} Feb 16 00:24:58 crc kubenswrapper[5114]: I0216 00:24:58.817072 5114 scope.go:117] "RemoveContainer" containerID="69e0f3cae4f71159d69c9928f2458fb0f468e7cda8bfc53734c7e64029a55851" Feb 16 00:24:59 crc kubenswrapper[5114]: I0216 00:24:59.237774 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx" event={"ID":"54228c4d-e0b6-4b56-84fc-f61ea9be6043","Type":"ContainerStarted","Data":"278e3cfc61a8cc88a04ad97bf7e25af264f46e76b9ed2c5282f77be353707813"} Feb 16 00:24:59 crc kubenswrapper[5114]: I0216 00:24:59.242452 5114 kubelet.go:2569] "SyncLoop (PLEG): 
event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5" event={"ID":"f7ddf84e-a562-4084-8975-cf18dd6558f7","Type":"ContainerStarted","Data":"c4b141fc6bb09cfd5de5ff93436fe5ec4e097193589dcc5788342d6e4e97b794"} Feb 16 00:25:00 crc kubenswrapper[5114]: I0216 00:25:00.252598 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp" event={"ID":"7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f","Type":"ContainerStarted","Data":"a117ab436b3b28cf71d1d5f857ce51a53d3388e7080c2cd3fafa7f525adb8620"} Feb 16 00:25:11 crc kubenswrapper[5114]: I0216 00:25:11.608670 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Feb 16 00:25:11 crc kubenswrapper[5114]: I0216 00:25:11.960656 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Feb 16 00:25:11 crc kubenswrapper[5114]: I0216 00:25:11.966043 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Feb 16 00:25:11 crc kubenswrapper[5114]: I0216 00:25:11.967585 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Feb 16 00:25:11 crc kubenswrapper[5114]: I0216 00:25:11.978824 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.023132 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/98b2ac1c-122e-4b2f-aac8-bf79123add8e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.023234 5114 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-299t6\" (UniqueName: \"kubernetes.io/projected/98b2ac1c-122e-4b2f-aac8-bf79123add8e-kube-api-access-299t6\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.023421 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/98b2ac1c-122e-4b2f-aac8-bf79123add8e-qdr-test-config\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.125630 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/98b2ac1c-122e-4b2f-aac8-bf79123add8e-qdr-test-config\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.125790 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/98b2ac1c-122e-4b2f-aac8-bf79123add8e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.125860 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-299t6\" (UniqueName: \"kubernetes.io/projected/98b2ac1c-122e-4b2f-aac8-bf79123add8e-kube-api-access-299t6\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.127317 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: 
\"kubernetes.io/configmap/98b2ac1c-122e-4b2f-aac8-bf79123add8e-qdr-test-config\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.140495 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/98b2ac1c-122e-4b2f-aac8-bf79123add8e-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.156808 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-299t6\" (UniqueName: \"kubernetes.io/projected/98b2ac1c-122e-4b2f-aac8-bf79123add8e-kube-api-access-299t6\") pod \"qdr-test\" (UID: \"98b2ac1c-122e-4b2f-aac8-bf79123add8e\") " pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.292809 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Feb 16 00:25:12 crc kubenswrapper[5114]: I0216 00:25:12.603436 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Feb 16 00:25:13 crc kubenswrapper[5114]: I0216 00:25:13.440466 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"98b2ac1c-122e-4b2f-aac8-bf79123add8e","Type":"ContainerStarted","Data":"127154cae52fd4c9186c16a52dee2fa81c1542ed3ee7e05429eec32fa0ee7317"} Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.499129 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"98b2ac1c-122e-4b2f-aac8-bf79123add8e","Type":"ContainerStarted","Data":"d82a45a957ecf10b0471892eb3977cefd906576c2a1166c27572df28c7789faa"} Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.541445 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.429609593 podStartE2EDuration="8.541396959s" podCreationTimestamp="2026-02-16 00:25:11 +0000 UTC" firstStartedPulling="2026-02-16 00:25:12.61460097 +0000 UTC m=+988.995877818" lastFinishedPulling="2026-02-16 00:25:18.726388366 +0000 UTC m=+995.107665184" observedRunningTime="2026-02-16 00:25:19.528082434 +0000 UTC m=+995.909359292" watchObservedRunningTime="2026-02-16 00:25:19.541396959 +0000 UTC m=+995.922673807" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.868925 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-lnxmc"] Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.881656 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.882415 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-lnxmc"] Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.885025 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.885377 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.885891 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.886189 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.886337 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.886530 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.956680 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.956786 5114 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrcrj\" (UniqueName: \"kubernetes.io/projected/209c3367-6f3f-4281-8160-d4778c61fd0a-kube-api-access-jrcrj\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.956833 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-config\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.957038 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-healthcheck-log\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.957065 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.957101 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-sensubility-config\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " 
pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:19 crc kubenswrapper[5114]: I0216 00:25:19.957353 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.058744 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.058878 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jrcrj\" (UniqueName: \"kubernetes.io/projected/209c3367-6f3f-4281-8160-d4778c61fd0a-kube-api-access-jrcrj\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.058945 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-config\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.058998 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-healthcheck-log\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: 
\"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.059309 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.059471 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-sensubility-config\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.059680 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.059773 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.060181 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-healthcheck-log\") pod 
\"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.060566 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-publisher\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.060725 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-config\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.061018 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.061072 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-sensubility-config\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.085387 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.085484 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.085887 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrcrj\" (UniqueName: \"kubernetes.io/projected/209c3367-6f3f-4281-8160-d4778c61fd0a-kube-api-access-jrcrj\") pod \"stf-smoketest-smoke1-lnxmc\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.209912 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.220584 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.221380 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.241951 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.370068 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vsjc\" (UniqueName: \"kubernetes.io/projected/cb665754-9906-463c-9b97-a1f2c201ee33-kube-api-access-2vsjc\") pod \"curl\" (UID: \"cb665754-9906-463c-9b97-a1f2c201ee33\") " pod="service-telemetry/curl" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.472653 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2vsjc\" (UniqueName: \"kubernetes.io/projected/cb665754-9906-463c-9b97-a1f2c201ee33-kube-api-access-2vsjc\") pod \"curl\" (UID: \"cb665754-9906-463c-9b97-a1f2c201ee33\") " pod="service-telemetry/curl" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.498555 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vsjc\" (UniqueName: \"kubernetes.io/projected/cb665754-9906-463c-9b97-a1f2c201ee33-kube-api-access-2vsjc\") pod \"curl\" (UID: \"cb665754-9906-463c-9b97-a1f2c201ee33\") " pod="service-telemetry/curl" Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.517234 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-lnxmc"] Feb 16 00:25:20 crc kubenswrapper[5114]: W0216 00:25:20.532782 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod209c3367_6f3f_4281_8160_d4778c61fd0a.slice/crio-0e9fc35cf077e90f86b7d635ab2716b2580fd57221b9868e8b49fe2cf2960148 WatchSource:0}: Error finding container 0e9fc35cf077e90f86b7d635ab2716b2580fd57221b9868e8b49fe2cf2960148: Status 404 returned error can't find the container with id 0e9fc35cf077e90f86b7d635ab2716b2580fd57221b9868e8b49fe2cf2960148 Feb 16 00:25:20 crc kubenswrapper[5114]: I0216 00:25:20.555855 5114 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/curl" Feb 16 00:25:21 crc kubenswrapper[5114]: I0216 00:25:21.083340 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Feb 16 00:25:21 crc kubenswrapper[5114]: I0216 00:25:21.531086 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"cb665754-9906-463c-9b97-a1f2c201ee33","Type":"ContainerStarted","Data":"714d7c63afb6867414dc3d68f4470de756fbb0aae566a5007903543813a7667f"} Feb 16 00:25:21 crc kubenswrapper[5114]: I0216 00:25:21.533227 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" event={"ID":"209c3367-6f3f-4281-8160-d4778c61fd0a","Type":"ContainerStarted","Data":"0e9fc35cf077e90f86b7d635ab2716b2580fd57221b9868e8b49fe2cf2960148"} Feb 16 00:25:29 crc kubenswrapper[5114]: I0216 00:25:29.607807 5114 generic.go:358] "Generic (PLEG): container finished" podID="cb665754-9906-463c-9b97-a1f2c201ee33" containerID="9ffeccebe476db07dbfcf43941e8b6c9739f1593d9c6814ec87acfcf36c741e1" exitCode=0 Feb 16 00:25:29 crc kubenswrapper[5114]: I0216 00:25:29.607891 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"cb665754-9906-463c-9b97-a1f2c201ee33","Type":"ContainerDied","Data":"9ffeccebe476db07dbfcf43941e8b6c9739f1593d9c6814ec87acfcf36c741e1"} Feb 16 00:25:29 crc kubenswrapper[5114]: I0216 00:25:29.610215 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" event={"ID":"209c3367-6f3f-4281-8160-d4778c61fd0a","Type":"ContainerStarted","Data":"47b481551c7442154d0d1fc4335b363cdd3c20e1dd3f78f6807029e7fc497b3c"} Feb 16 00:25:34 crc kubenswrapper[5114]: I0216 00:25:34.726429 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Feb 16 00:25:34 crc kubenswrapper[5114]: I0216 00:25:34.806577 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2vsjc\" (UniqueName: \"kubernetes.io/projected/cb665754-9906-463c-9b97-a1f2c201ee33-kube-api-access-2vsjc\") pod \"cb665754-9906-463c-9b97-a1f2c201ee33\" (UID: \"cb665754-9906-463c-9b97-a1f2c201ee33\") " Feb 16 00:25:34 crc kubenswrapper[5114]: I0216 00:25:34.812634 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb665754-9906-463c-9b97-a1f2c201ee33-kube-api-access-2vsjc" (OuterVolumeSpecName: "kube-api-access-2vsjc") pod "cb665754-9906-463c-9b97-a1f2c201ee33" (UID: "cb665754-9906-463c-9b97-a1f2c201ee33"). InnerVolumeSpecName "kube-api-access-2vsjc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:25:34 crc kubenswrapper[5114]: I0216 00:25:34.910006 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2vsjc\" (UniqueName: \"kubernetes.io/projected/cb665754-9906-463c-9b97-a1f2c201ee33-kube-api-access-2vsjc\") on node \"crc\" DevicePath \"\"" Feb 16 00:25:34 crc kubenswrapper[5114]: I0216 00:25:34.910604 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_cb665754-9906-463c-9b97-a1f2c201ee33/curl/0.log" Feb 16 00:25:35 crc kubenswrapper[5114]: I0216 00:25:35.247181 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7gxth_e8a7463b-414b-493f-bee0-aee38e377445/prometheus-webhook-snmp/0.log" Feb 16 00:25:35 crc kubenswrapper[5114]: I0216 00:25:35.666051 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"cb665754-9906-463c-9b97-a1f2c201ee33","Type":"ContainerDied","Data":"714d7c63afb6867414dc3d68f4470de756fbb0aae566a5007903543813a7667f"} Feb 16 00:25:35 crc kubenswrapper[5114]: I0216 00:25:35.666113 5114 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714d7c63afb6867414dc3d68f4470de756fbb0aae566a5007903543813a7667f" Feb 16 00:25:35 crc kubenswrapper[5114]: I0216 00:25:35.666224 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Feb 16 00:25:35 crc kubenswrapper[5114]: I0216 00:25:35.679645 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" event={"ID":"209c3367-6f3f-4281-8160-d4778c61fd0a","Type":"ContainerStarted","Data":"037391e64c931d1d30f487f78de8ddca4915034c0579d2b571327a1db9d7eb02"} Feb 16 00:25:35 crc kubenswrapper[5114]: I0216 00:25:35.720057 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" podStartSLOduration=2.457739015 podStartE2EDuration="16.720023563s" podCreationTimestamp="2026-02-16 00:25:19 +0000 UTC" firstStartedPulling="2026-02-16 00:25:20.536119374 +0000 UTC m=+996.917396202" lastFinishedPulling="2026-02-16 00:25:34.798403932 +0000 UTC m=+1011.179680750" observedRunningTime="2026-02-16 00:25:35.703629932 +0000 UTC m=+1012.084906750" watchObservedRunningTime="2026-02-16 00:25:35.720023563 +0000 UTC m=+1012.101300421" Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.085970 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.086976 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.087121 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.089153 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08e121677631f460690080580c06d5b5374b81d3fbafdd43ec22ad0e68333766"} pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.090071 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" containerID="cri-o://08e121677631f460690080580c06d5b5374b81d3fbafdd43ec22ad0e68333766" gracePeriod=600 Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.818410 5114 generic.go:358] "Generic (PLEG): container finished" podID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerID="08e121677631f460690080580c06d5b5374b81d3fbafdd43ec22ad0e68333766" exitCode=0 Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.818669 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerDied","Data":"08e121677631f460690080580c06d5b5374b81d3fbafdd43ec22ad0e68333766"} Feb 16 00:25:50 crc kubenswrapper[5114]: I0216 00:25:50.819199 5114 scope.go:117] "RemoveContainer" containerID="d1dfab39c6a9f63f318ef9f1041cbb88e1fb9256dbb5157a9f49af9886d305ad" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.130588 5114 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29520026-lmzkb"] Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.132590 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cb665754-9906-463c-9b97-a1f2c201ee33" containerName="curl" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.132624 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb665754-9906-463c-9b97-a1f2c201ee33" containerName="curl" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.132863 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="cb665754-9906-463c-9b97-a1f2c201ee33" containerName="curl" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.419778 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520026-lmzkb"] Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.420008 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520026-lmzkb" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.429926 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.430417 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\"" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.430548 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.595084 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wdnv\" (UniqueName: \"kubernetes.io/projected/172de72d-25e0-43a0-b18d-2e8c7e548a80-kube-api-access-8wdnv\") pod \"auto-csr-approver-29520026-lmzkb\" (UID: \"172de72d-25e0-43a0-b18d-2e8c7e548a80\") " 
pod="openshift-infra/auto-csr-approver-29520026-lmzkb" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.696202 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8wdnv\" (UniqueName: \"kubernetes.io/projected/172de72d-25e0-43a0-b18d-2e8c7e548a80-kube-api-access-8wdnv\") pod \"auto-csr-approver-29520026-lmzkb\" (UID: \"172de72d-25e0-43a0-b18d-2e8c7e548a80\") " pod="openshift-infra/auto-csr-approver-29520026-lmzkb" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.730483 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wdnv\" (UniqueName: \"kubernetes.io/projected/172de72d-25e0-43a0-b18d-2e8c7e548a80-kube-api-access-8wdnv\") pod \"auto-csr-approver-29520026-lmzkb\" (UID: \"172de72d-25e0-43a0-b18d-2e8c7e548a80\") " pod="openshift-infra/auto-csr-approver-29520026-lmzkb" Feb 16 00:26:00 crc kubenswrapper[5114]: I0216 00:26:00.759814 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520026-lmzkb" Feb 16 00:26:01 crc kubenswrapper[5114]: I0216 00:26:01.695352 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520026-lmzkb"] Feb 16 00:26:01 crc kubenswrapper[5114]: I0216 00:26:01.963595 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520026-lmzkb" event={"ID":"172de72d-25e0-43a0-b18d-2e8c7e548a80","Type":"ContainerStarted","Data":"e1ec4a40954a2c2740735e552f8592dcfcb6e4ee27c7e64473ece560060ec13f"} Feb 16 00:26:02 crc kubenswrapper[5114]: I0216 00:26:02.985326 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"6728bdb88e7106d5ac3aae01393284af609b0611c76a30dcea25efd3ae3bc66d"} Feb 16 00:26:05 crc kubenswrapper[5114]: I0216 00:26:05.037190 5114 generic.go:358] 
"Generic (PLEG): container finished" podID="172de72d-25e0-43a0-b18d-2e8c7e548a80" containerID="4805d2bf685e7178614a71b00f0919700c7fb9fd40dd4169f474ee457c41b31a" exitCode=0 Feb 16 00:26:05 crc kubenswrapper[5114]: I0216 00:26:05.037294 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520026-lmzkb" event={"ID":"172de72d-25e0-43a0-b18d-2e8c7e548a80","Type":"ContainerDied","Data":"4805d2bf685e7178614a71b00f0919700c7fb9fd40dd4169f474ee457c41b31a"} Feb 16 00:26:05 crc kubenswrapper[5114]: I0216 00:26:05.051628 5114 generic.go:358] "Generic (PLEG): container finished" podID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerID="47b481551c7442154d0d1fc4335b363cdd3c20e1dd3f78f6807029e7fc497b3c" exitCode=0 Feb 16 00:26:05 crc kubenswrapper[5114]: I0216 00:26:05.051724 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" event={"ID":"209c3367-6f3f-4281-8160-d4778c61fd0a","Type":"ContainerDied","Data":"47b481551c7442154d0d1fc4335b363cdd3c20e1dd3f78f6807029e7fc497b3c"} Feb 16 00:26:05 crc kubenswrapper[5114]: I0216 00:26:05.052451 5114 scope.go:117] "RemoveContainer" containerID="47b481551c7442154d0d1fc4335b363cdd3c20e1dd3f78f6807029e7fc497b3c" Feb 16 00:26:05 crc kubenswrapper[5114]: I0216 00:26:05.420528 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7gxth_e8a7463b-414b-493f-bee0-aee38e377445/prometheus-webhook-snmp/0.log" Feb 16 00:26:06 crc kubenswrapper[5114]: I0216 00:26:06.407059 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29520026-lmzkb" Feb 16 00:26:06 crc kubenswrapper[5114]: I0216 00:26:06.505039 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wdnv\" (UniqueName: \"kubernetes.io/projected/172de72d-25e0-43a0-b18d-2e8c7e548a80-kube-api-access-8wdnv\") pod \"172de72d-25e0-43a0-b18d-2e8c7e548a80\" (UID: \"172de72d-25e0-43a0-b18d-2e8c7e548a80\") " Feb 16 00:26:06 crc kubenswrapper[5114]: I0216 00:26:06.512699 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/172de72d-25e0-43a0-b18d-2e8c7e548a80-kube-api-access-8wdnv" (OuterVolumeSpecName: "kube-api-access-8wdnv") pod "172de72d-25e0-43a0-b18d-2e8c7e548a80" (UID: "172de72d-25e0-43a0-b18d-2e8c7e548a80"). InnerVolumeSpecName "kube-api-access-8wdnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:26:06 crc kubenswrapper[5114]: I0216 00:26:06.607209 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8wdnv\" (UniqueName: \"kubernetes.io/projected/172de72d-25e0-43a0-b18d-2e8c7e548a80-kube-api-access-8wdnv\") on node \"crc\" DevicePath \"\"" Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.075033 5114 generic.go:358] "Generic (PLEG): container finished" podID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerID="037391e64c931d1d30f487f78de8ddca4915034c0579d2b571327a1db9d7eb02" exitCode=0 Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.075134 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" event={"ID":"209c3367-6f3f-4281-8160-d4778c61fd0a","Type":"ContainerDied","Data":"037391e64c931d1d30f487f78de8ddca4915034c0579d2b571327a1db9d7eb02"} Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.077620 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29520026-lmzkb" Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.077664 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520026-lmzkb" event={"ID":"172de72d-25e0-43a0-b18d-2e8c7e548a80","Type":"ContainerDied","Data":"e1ec4a40954a2c2740735e552f8592dcfcb6e4ee27c7e64473ece560060ec13f"} Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.077716 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ec4a40954a2c2740735e552f8592dcfcb6e4ee27c7e64473ece560060ec13f" Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.485838 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29520020-9tjzj"] Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.497436 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29520020-9tjzj"] Feb 16 00:26:07 crc kubenswrapper[5114]: I0216 00:26:07.831061 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="073d01c7-0d60-496f-9be5-9c82140bf609" path="/var/lib/kubelet/pods/073d01c7-0d60-496f-9be5-9c82140bf609/volumes" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.399645 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.438622 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-publisher\") pod \"209c3367-6f3f-4281-8160-d4778c61fd0a\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.438735 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-config\") pod \"209c3367-6f3f-4281-8160-d4778c61fd0a\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.438785 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrcrj\" (UniqueName: \"kubernetes.io/projected/209c3367-6f3f-4281-8160-d4778c61fd0a-kube-api-access-jrcrj\") pod \"209c3367-6f3f-4281-8160-d4778c61fd0a\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.438824 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-sensubility-config\") pod \"209c3367-6f3f-4281-8160-d4778c61fd0a\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.439669 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-entrypoint-script\") pod \"209c3367-6f3f-4281-8160-d4778c61fd0a\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.439813 5114 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-healthcheck-log\") pod \"209c3367-6f3f-4281-8160-d4778c61fd0a\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.439859 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-entrypoint-script\") pod \"209c3367-6f3f-4281-8160-d4778c61fd0a\" (UID: \"209c3367-6f3f-4281-8160-d4778c61fd0a\") " Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.447016 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209c3367-6f3f-4281-8160-d4778c61fd0a-kube-api-access-jrcrj" (OuterVolumeSpecName: "kube-api-access-jrcrj") pod "209c3367-6f3f-4281-8160-d4778c61fd0a" (UID: "209c3367-6f3f-4281-8160-d4778c61fd0a"). InnerVolumeSpecName "kube-api-access-jrcrj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.457920 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "209c3367-6f3f-4281-8160-d4778c61fd0a" (UID: "209c3367-6f3f-4281-8160-d4778c61fd0a"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.458476 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "209c3367-6f3f-4281-8160-d4778c61fd0a" (UID: "209c3367-6f3f-4281-8160-d4778c61fd0a"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.462525 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "209c3367-6f3f-4281-8160-d4778c61fd0a" (UID: "209c3367-6f3f-4281-8160-d4778c61fd0a"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.468820 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "209c3367-6f3f-4281-8160-d4778c61fd0a" (UID: "209c3367-6f3f-4281-8160-d4778c61fd0a"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.477624 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "209c3367-6f3f-4281-8160-d4778c61fd0a" (UID: "209c3367-6f3f-4281-8160-d4778c61fd0a"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.479023 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "209c3367-6f3f-4281-8160-d4778c61fd0a" (UID: "209c3367-6f3f-4281-8160-d4778c61fd0a"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.542693 5114 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.542733 5114 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-healthcheck-log\") on node \"crc\" DevicePath \"\"" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.542746 5114 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.542757 5114 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.542769 5114 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-collectd-config\") on node \"crc\" DevicePath \"\"" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.542781 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrcrj\" (UniqueName: \"kubernetes.io/projected/209c3367-6f3f-4281-8160-d4778c61fd0a-kube-api-access-jrcrj\") on node \"crc\" DevicePath \"\"" Feb 16 00:26:08 crc kubenswrapper[5114]: I0216 00:26:08.542792 5114 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/209c3367-6f3f-4281-8160-d4778c61fd0a-sensubility-config\") on node 
\"crc\" DevicePath \"\"" Feb 16 00:26:09 crc kubenswrapper[5114]: I0216 00:26:09.100534 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" event={"ID":"209c3367-6f3f-4281-8160-d4778c61fd0a","Type":"ContainerDied","Data":"0e9fc35cf077e90f86b7d635ab2716b2580fd57221b9868e8b49fe2cf2960148"} Feb 16 00:26:09 crc kubenswrapper[5114]: I0216 00:26:09.100584 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9fc35cf077e90f86b7d635ab2716b2580fd57221b9868e8b49fe2cf2960148" Feb 16 00:26:09 crc kubenswrapper[5114]: I0216 00:26:09.100652 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-lnxmc" Feb 16 00:26:10 crc kubenswrapper[5114]: I0216 00:26:10.402586 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-lnxmc_209c3367-6f3f-4281-8160-d4778c61fd0a/smoketest-collectd/0.log" Feb 16 00:26:10 crc kubenswrapper[5114]: I0216 00:26:10.663002 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-lnxmc_209c3367-6f3f-4281-8160-d4778c61fd0a/smoketest-ceilometer/0.log" Feb 16 00:26:10 crc kubenswrapper[5114]: I0216 00:26:10.961996 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-fbspj_57b6502c-0320-4c84-984c-ed19935fbe7c/default-interconnect/0.log" Feb 16 00:26:11 crc kubenswrapper[5114]: I0216 00:26:11.285868 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-lnlc2_c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0/bridge/2.log" Feb 16 00:26:11 crc kubenswrapper[5114]: I0216 00:26:11.570077 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-787645d794-lnlc2_c27b36c4-17e9-40f3-aaa0-e9ecd028b0e0/sg-core/0.log" Feb 16 00:26:11 
crc kubenswrapper[5114]: I0216 00:26:11.892638 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp_7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f/bridge/2.log" Feb 16 00:26:12 crc kubenswrapper[5114]: I0216 00:26:12.127936 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-7f5f675f58-rs7lp_7f6e7f61-2dbb-456d-aba2-6d912bbe0b4f/sg-core/0.log" Feb 16 00:26:12 crc kubenswrapper[5114]: I0216 00:26:12.381791 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb_5c9897b9-4b63-4f01-ad1e-acbd2aae855c/bridge/2.log" Feb 16 00:26:12 crc kubenswrapper[5114]: I0216 00:26:12.681289 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-545b564d9f-62bmb_5c9897b9-4b63-4f01-ad1e-acbd2aae855c/sg-core/0.log" Feb 16 00:26:12 crc kubenswrapper[5114]: I0216 00:26:12.936159 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx_54228c4d-e0b6-4b56-84fc-f61ea9be6043/bridge/2.log" Feb 16 00:26:13 crc kubenswrapper[5114]: I0216 00:26:13.264134 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-c6c675fc-g8wdx_54228c4d-e0b6-4b56-84fc-f61ea9be6043/sg-core/0.log" Feb 16 00:26:13 crc kubenswrapper[5114]: I0216 00:26:13.480671 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5_f7ddf84e-a562-4084-8975-cf18dd6558f7/bridge/2.log" Feb 16 00:26:13 crc kubenswrapper[5114]: I0216 00:26:13.750925 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-66d5b7c5fc-2xxc5_f7ddf84e-a562-4084-8975-cf18dd6558f7/sg-core/0.log" 
Feb 16 00:26:17 crc kubenswrapper[5114]: I0216 00:26:17.140478 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-76hjx_18677532-f05d-4f6d-bd9f-5ff26cdd64c8/operator/0.log" Feb 16 00:26:17 crc kubenswrapper[5114]: I0216 00:26:17.417141 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_1ff8c5ee-b5d9-4135-a6bc-793a420274d5/prometheus/0.log" Feb 16 00:26:17 crc kubenswrapper[5114]: I0216 00:26:17.685642 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_85e5d57a-83dc-4ddd-9268-29b9441ba077/elasticsearch/0.log" Feb 16 00:26:17 crc kubenswrapper[5114]: I0216 00:26:17.954381 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6774d8dfbc-7gxth_e8a7463b-414b-493f-bee0-aee38e377445/prometheus-webhook-snmp/0.log" Feb 16 00:26:18 crc kubenswrapper[5114]: I0216 00:26:18.197940 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_d1a34684-d024-4cc2-a7fc-ffcdf071e216/alertmanager/0.log" Feb 16 00:26:30 crc kubenswrapper[5114]: I0216 00:26:30.549657 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-794b5697c7-zk2fg_3a0bdffd-0870-40b1-a79d-90994889cdcb/operator/0.log" Feb 16 00:26:34 crc kubenswrapper[5114]: I0216 00:26:34.108103 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-97b85656c-76hjx_18677532-f05d-4f6d-bd9f-5ff26cdd64c8/operator/0.log" Feb 16 00:26:34 crc kubenswrapper[5114]: I0216 00:26:34.357107 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_98b2ac1c-122e-4b2f-aac8-bf79123add8e/qdr/0.log" Feb 16 00:26:49 crc kubenswrapper[5114]: I0216 00:26:49.462905 5114 scope.go:117] "RemoveContainer" 
containerID="838b6d8aad50dbc2ebe38dd81c8c9eb52e8a766058b35e986f471084c1cff7bf" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.206916 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-dwkkv/must-gather-dwww7"] Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208460 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerName="smoketest-ceilometer" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208479 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerName="smoketest-ceilometer" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208490 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="172de72d-25e0-43a0-b18d-2e8c7e548a80" containerName="oc" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208497 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="172de72d-25e0-43a0-b18d-2e8c7e548a80" containerName="oc" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208522 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerName="smoketest-collectd" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208531 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerName="smoketest-collectd" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208695 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerName="smoketest-ceilometer" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208711 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="172de72d-25e0-43a0-b18d-2e8c7e548a80" containerName="oc" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.208725 5114 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="209c3367-6f3f-4281-8160-d4778c61fd0a" containerName="smoketest-collectd" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.215404 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.220941 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-dwkkv\"/\"kube-root-ca.crt\"" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.221195 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-dwkkv\"/\"openshift-service-ca.crt\"" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.221376 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-dwkkv\"/\"default-dockercfg-f5rtl\"" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.226363 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dwkkv/must-gather-dwww7"] Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.377005 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0b7e870-7d23-4439-be1a-b364faf90d09-must-gather-output\") pod \"must-gather-dwww7\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.377454 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzlt7\" (UniqueName: \"kubernetes.io/projected/b0b7e870-7d23-4439-be1a-b364faf90d09-kube-api-access-zzlt7\") pod \"must-gather-dwww7\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.479278 5114 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zzlt7\" (UniqueName: \"kubernetes.io/projected/b0b7e870-7d23-4439-be1a-b364faf90d09-kube-api-access-zzlt7\") pod \"must-gather-dwww7\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.479361 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0b7e870-7d23-4439-be1a-b364faf90d09-must-gather-output\") pod \"must-gather-dwww7\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.479730 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0b7e870-7d23-4439-be1a-b364faf90d09-must-gather-output\") pod \"must-gather-dwww7\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.499917 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzlt7\" (UniqueName: \"kubernetes.io/projected/b0b7e870-7d23-4439-be1a-b364faf90d09-kube-api-access-zzlt7\") pod \"must-gather-dwww7\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.534817 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:27:11 crc kubenswrapper[5114]: I0216 00:27:11.969411 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-dwkkv/must-gather-dwww7"] Feb 16 00:27:12 crc kubenswrapper[5114]: I0216 00:27:12.693141 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwkkv/must-gather-dwww7" event={"ID":"b0b7e870-7d23-4439-be1a-b364faf90d09","Type":"ContainerStarted","Data":"3dc7fa2b55f871deea990649650627c8e4247a161d03d992c8f929fd213f1b0a"} Feb 16 00:27:18 crc kubenswrapper[5114]: I0216 00:27:18.747773 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwkkv/must-gather-dwww7" event={"ID":"b0b7e870-7d23-4439-be1a-b364faf90d09","Type":"ContainerStarted","Data":"370c1eeeb45b89dce99245fdc72724c1caea0045a8a80c83576aa248ed79394f"} Feb 16 00:27:18 crc kubenswrapper[5114]: I0216 00:27:18.748496 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwkkv/must-gather-dwww7" event={"ID":"b0b7e870-7d23-4439-be1a-b364faf90d09","Type":"ContainerStarted","Data":"05f751d5ae3295a5129cb310a0b37d6e6936c173e875031f3b1cbc61819e5945"} Feb 16 00:27:18 crc kubenswrapper[5114]: I0216 00:27:18.765800 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-dwkkv/must-gather-dwww7" podStartSLOduration=1.58205896 podStartE2EDuration="7.765779948s" podCreationTimestamp="2026-02-16 00:27:11 +0000 UTC" firstStartedPulling="2026-02-16 00:27:11.98215007 +0000 UTC m=+1108.363426898" lastFinishedPulling="2026-02-16 00:27:18.165871068 +0000 UTC m=+1114.547147886" observedRunningTime="2026-02-16 00:27:18.763866534 +0000 UTC m=+1115.145143372" watchObservedRunningTime="2026-02-16 00:27:18.765779948 +0000 UTC m=+1115.147056766" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.140389 5114 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29520028-cqrdt"] Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.150773 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520028-cqrdt"] Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.150957 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520028-cqrdt" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.154236 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\"" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.154287 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.154897 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.232938 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xtl6\" (UniqueName: \"kubernetes.io/projected/52099491-2311-4415-9b46-dd113bef3357-kube-api-access-9xtl6\") pod \"auto-csr-approver-29520028-cqrdt\" (UID: \"52099491-2311-4415-9b46-dd113bef3357\") " pod="openshift-infra/auto-csr-approver-29520028-cqrdt" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.335284 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9xtl6\" (UniqueName: \"kubernetes.io/projected/52099491-2311-4415-9b46-dd113bef3357-kube-api-access-9xtl6\") pod \"auto-csr-approver-29520028-cqrdt\" (UID: \"52099491-2311-4415-9b46-dd113bef3357\") " pod="openshift-infra/auto-csr-approver-29520028-cqrdt" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.361844 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9xtl6\" (UniqueName: \"kubernetes.io/projected/52099491-2311-4415-9b46-dd113bef3357-kube-api-access-9xtl6\") pod \"auto-csr-approver-29520028-cqrdt\" (UID: \"52099491-2311-4415-9b46-dd113bef3357\") " pod="openshift-infra/auto-csr-approver-29520028-cqrdt" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.473137 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520028-cqrdt" Feb 16 00:28:00 crc kubenswrapper[5114]: I0216 00:28:00.723219 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520028-cqrdt"] Feb 16 00:28:01 crc kubenswrapper[5114]: I0216 00:28:01.096053 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520028-cqrdt" event={"ID":"52099491-2311-4415-9b46-dd113bef3357","Type":"ContainerStarted","Data":"f03391a5719ef43f2fed4cbc746734b139e21b5dfaecf2fea311b04c18f1a57f"} Feb 16 00:28:03 crc kubenswrapper[5114]: I0216 00:28:03.116418 5114 generic.go:358] "Generic (PLEG): container finished" podID="52099491-2311-4415-9b46-dd113bef3357" containerID="9022611d1a274d1222d3674fd9c60a6b13ea739c84d9c8086210a7c96ffb5b2e" exitCode=0 Feb 16 00:28:03 crc kubenswrapper[5114]: I0216 00:28:03.116501 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520028-cqrdt" event={"ID":"52099491-2311-4415-9b46-dd113bef3357","Type":"ContainerDied","Data":"9022611d1a274d1222d3674fd9c60a6b13ea739c84d9c8086210a7c96ffb5b2e"} Feb 16 00:28:04 crc kubenswrapper[5114]: I0216 00:28:04.348145 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-7qhtw_cd5244de-0460-4f31-914d-85541d3c975f/control-plane-machine-set-operator/0.log" Feb 16 00:28:04 crc kubenswrapper[5114]: I0216 00:28:04.364468 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29520028-cqrdt" Feb 16 00:28:04 crc kubenswrapper[5114]: I0216 00:28:04.497835 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xtl6\" (UniqueName: \"kubernetes.io/projected/52099491-2311-4415-9b46-dd113bef3357-kube-api-access-9xtl6\") pod \"52099491-2311-4415-9b46-dd113bef3357\" (UID: \"52099491-2311-4415-9b46-dd113bef3357\") " Feb 16 00:28:04 crc kubenswrapper[5114]: I0216 00:28:04.504425 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52099491-2311-4415-9b46-dd113bef3357-kube-api-access-9xtl6" (OuterVolumeSpecName: "kube-api-access-9xtl6") pod "52099491-2311-4415-9b46-dd113bef3357" (UID: "52099491-2311-4415-9b46-dd113bef3357"). InnerVolumeSpecName "kube-api-access-9xtl6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:28:04 crc kubenswrapper[5114]: I0216 00:28:04.559042 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-5n27w_4f2c237a-0f7f-4dd6-a35c-6533fbc3522e/kube-rbac-proxy/0.log" Feb 16 00:28:04 crc kubenswrapper[5114]: I0216 00:28:04.599742 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9xtl6\" (UniqueName: \"kubernetes.io/projected/52099491-2311-4415-9b46-dd113bef3357-kube-api-access-9xtl6\") on node \"crc\" DevicePath \"\"" Feb 16 00:28:04 crc kubenswrapper[5114]: I0216 00:28:04.612326 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-5n27w_4f2c237a-0f7f-4dd6-a35c-6533fbc3522e/machine-api-operator/0.log" Feb 16 00:28:05 crc kubenswrapper[5114]: I0216 00:28:05.133024 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29520028-cqrdt" Feb 16 00:28:05 crc kubenswrapper[5114]: I0216 00:28:05.133019 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520028-cqrdt" event={"ID":"52099491-2311-4415-9b46-dd113bef3357","Type":"ContainerDied","Data":"f03391a5719ef43f2fed4cbc746734b139e21b5dfaecf2fea311b04c18f1a57f"} Feb 16 00:28:05 crc kubenswrapper[5114]: I0216 00:28:05.133607 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f03391a5719ef43f2fed4cbc746734b139e21b5dfaecf2fea311b04c18f1a57f" Feb 16 00:28:05 crc kubenswrapper[5114]: I0216 00:28:05.429242 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29520022-kwsf6"] Feb 16 00:28:05 crc kubenswrapper[5114]: I0216 00:28:05.438610 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29520022-kwsf6"] Feb 16 00:28:05 crc kubenswrapper[5114]: I0216 00:28:05.826973 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a185ceb8-ad6c-4fc0-8cce-72142ea846d8" path="/var/lib/kubelet/pods/a185ceb8-ad6c-4fc0-8cce-72142ea846d8/volumes" Feb 16 00:28:16 crc kubenswrapper[5114]: I0216 00:28:16.973135 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-759f64656b-zq6b6_c7fdc724-56e4-4fa4-a89a-cc129d2ce1d8/cert-manager-controller/0.log" Feb 16 00:28:17 crc kubenswrapper[5114]: I0216 00:28:17.176090 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-597b96b99b-l6nlj_c9da52c1-a5fa-4758-a23e-eb1ac46f02c6/cert-manager-webhook/0.log" Feb 16 00:28:17 crc kubenswrapper[5114]: I0216 00:28:17.182659 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-8966b78d4-dhgdr_9f9a21b1-a399-465f-975c-22782affdbe7/cert-manager-cainjector/0.log" Feb 16 00:28:20 crc kubenswrapper[5114]: I0216 
00:28:20.085584 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:28:20 crc kubenswrapper[5114]: I0216 00:28:20.087577 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:28:31 crc kubenswrapper[5114]: I0216 00:28:31.214981 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-phf92_13bfd2d1-3c0a-4fc6-a84b-45f3459195b0/prometheus-operator/0.log" Feb 16 00:28:31 crc kubenswrapper[5114]: I0216 00:28:31.323278 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6db558659d-gsmjz_eb532db4-78a1-465f-8c41-ba9de05d7349/prometheus-operator-admission-webhook/0.log" Feb 16 00:28:31 crc kubenswrapper[5114]: I0216 00:28:31.418432 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6db558659d-lrffd_3c3c704d-2d95-41a5-9189-83392c97240e/prometheus-operator-admission-webhook/0.log" Feb 16 00:28:31 crc kubenswrapper[5114]: I0216 00:28:31.544933 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-c8p8q_76206e1f-dcb7-4b06-9980-7bfb8c3c9b02/operator/0.log" Feb 16 00:28:31 crc kubenswrapper[5114]: I0216 00:28:31.634558 5114 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-hqf4v_24f609a1-7bb0-432e-951d-c23dc581bc81/perses-operator/0.log" Feb 16 00:28:45 crc kubenswrapper[5114]: I0216 00:28:45.589760 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf_99cc350b-a6cc-4472-afcb-96cba5c0cf4a/util/0.log" Feb 16 00:28:45 crc kubenswrapper[5114]: I0216 00:28:45.711767 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf_99cc350b-a6cc-4472-afcb-96cba5c0cf4a/util/0.log" Feb 16 00:28:45 crc kubenswrapper[5114]: I0216 00:28:45.735776 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf_99cc350b-a6cc-4472-afcb-96cba5c0cf4a/pull/0.log" Feb 16 00:28:45 crc kubenswrapper[5114]: I0216 00:28:45.778514 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf_99cc350b-a6cc-4472-afcb-96cba5c0cf4a/pull/0.log" Feb 16 00:28:45 crc kubenswrapper[5114]: I0216 00:28:45.941827 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf_99cc350b-a6cc-4472-afcb-96cba5c0cf4a/pull/0.log" Feb 16 00:28:45 crc kubenswrapper[5114]: I0216 00:28:45.952707 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf_99cc350b-a6cc-4472-afcb-96cba5c0cf4a/extract/0.log" Feb 16 00:28:45 crc kubenswrapper[5114]: I0216 00:28:45.956488 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_00e596f74c7ff6aa630d3bf44b91123ebafce6c9d7df4104f82e2338f1ks4wf_99cc350b-a6cc-4472-afcb-96cba5c0cf4a/util/0.log" Feb 16 00:28:46 crc 
kubenswrapper[5114]: I0216 00:28:46.103014 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj_6fa0aab5-e3d3-4cb3-8409-296dcc548f30/util/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.258502 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj_6fa0aab5-e3d3-4cb3-8409-296dcc548f30/pull/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.265929 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj_6fa0aab5-e3d3-4cb3-8409-296dcc548f30/pull/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.288727 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj_6fa0aab5-e3d3-4cb3-8409-296dcc548f30/util/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.463631 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj_6fa0aab5-e3d3-4cb3-8409-296dcc548f30/util/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.467072 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj_6fa0aab5-e3d3-4cb3-8409-296dcc548f30/pull/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.478783 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fgjzdj_6fa0aab5-e3d3-4cb3-8409-296dcc548f30/extract/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.631795 5114 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p_02025ac3-beca-451a-8036-70876e1f2439/util/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.805406 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p_02025ac3-beca-451a-8036-70876e1f2439/pull/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.816591 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p_02025ac3-beca-451a-8036-70876e1f2439/pull/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.836423 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p_02025ac3-beca-451a-8036-70876e1f2439/util/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.917429 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p_02025ac3-beca-451a-8036-70876e1f2439/util/0.log" Feb 16 00:28:46 crc kubenswrapper[5114]: I0216 00:28:46.986616 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p_02025ac3-beca-451a-8036-70876e1f2439/extract/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.007053 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5xhc5p_02025ac3-beca-451a-8036-70876e1f2439/pull/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.066430 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz_30a1abbb-4ff1-412e-967c-bfdbe8a5468f/util/0.log" Feb 16 
00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.214354 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jlj6_c4627438-b1a6-4cc9-85f6-10e9dd97943b/kube-multus/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.215235 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-5jlj6_c4627438-b1a6-4cc9-85f6-10e9dd97943b/kube-multus/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.225216 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.225432 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.284083 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz_30a1abbb-4ff1-412e-967c-bfdbe8a5468f/util/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.293810 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz_30a1abbb-4ff1-412e-967c-bfdbe8a5468f/pull/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.335325 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz_30a1abbb-4ff1-412e-967c-bfdbe8a5468f/pull/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.486874 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz_30a1abbb-4ff1-412e-967c-bfdbe8a5468f/extract/0.log" Feb 16 00:28:47 crc 
kubenswrapper[5114]: I0216 00:28:47.492699 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz_30a1abbb-4ff1-412e-967c-bfdbe8a5468f/util/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.493388 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f0822kkz_30a1abbb-4ff1-412e-967c-bfdbe8a5468f/pull/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.683390 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x55hq_e9a8a33b-86f6-46d3-9efb-f4395a0a9830/extract-utilities/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.807955 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x55hq_e9a8a33b-86f6-46d3-9efb-f4395a0a9830/extract-utilities/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.807929 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x55hq_e9a8a33b-86f6-46d3-9efb-f4395a0a9830/extract-content/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.829828 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x55hq_e9a8a33b-86f6-46d3-9efb-f4395a0a9830/extract-content/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.987552 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x55hq_e9a8a33b-86f6-46d3-9efb-f4395a0a9830/extract-content/0.log" Feb 16 00:28:47 crc kubenswrapper[5114]: I0216 00:28:47.996761 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x55hq_e9a8a33b-86f6-46d3-9efb-f4395a0a9830/extract-utilities/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.145821 5114 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-x55hq_e9a8a33b-86f6-46d3-9efb-f4395a0a9830/registry-server/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.194501 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sm2s6_e23f1349-18bf-40ca-8419-c94cbe0665a3/extract-utilities/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.362987 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sm2s6_e23f1349-18bf-40ca-8419-c94cbe0665a3/extract-utilities/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.368679 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sm2s6_e23f1349-18bf-40ca-8419-c94cbe0665a3/extract-content/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.382577 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sm2s6_e23f1349-18bf-40ca-8419-c94cbe0665a3/extract-content/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.553532 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sm2s6_e23f1349-18bf-40ca-8419-c94cbe0665a3/extract-content/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.553666 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-sm2s6_e23f1349-18bf-40ca-8419-c94cbe0665a3/extract-utilities/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.653930 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-qqbpj_1d23892e-7be3-463c-800d-7cb9ec870736/marketplace-operator/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.741225 5114 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-sm2s6_e23f1349-18bf-40ca-8419-c94cbe0665a3/registry-server/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.755800 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-858qd_c5f57fd8-c18c-4747-9e05-c9061a12908e/extract-utilities/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.887536 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-858qd_c5f57fd8-c18c-4747-9e05-c9061a12908e/extract-content/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.923234 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-858qd_c5f57fd8-c18c-4747-9e05-c9061a12908e/extract-content/0.log" Feb 16 00:28:48 crc kubenswrapper[5114]: I0216 00:28:48.928541 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-858qd_c5f57fd8-c18c-4747-9e05-c9061a12908e/extract-utilities/0.log" Feb 16 00:28:49 crc kubenswrapper[5114]: I0216 00:28:49.098147 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-858qd_c5f57fd8-c18c-4747-9e05-c9061a12908e/extract-utilities/0.log" Feb 16 00:28:49 crc kubenswrapper[5114]: I0216 00:28:49.113407 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-858qd_c5f57fd8-c18c-4747-9e05-c9061a12908e/extract-content/0.log" Feb 16 00:28:49 crc kubenswrapper[5114]: I0216 00:28:49.216367 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-858qd_c5f57fd8-c18c-4747-9e05-c9061a12908e/registry-server/0.log" Feb 16 00:28:49 crc kubenswrapper[5114]: I0216 00:28:49.626483 5114 scope.go:117] "RemoveContainer" containerID="b0603adaa5b6469cbce9719c0194dbc76b099fb9ad70da1138307a7301b2ae4b" Feb 16 00:28:50 crc kubenswrapper[5114]: I0216 00:28:50.084824 
5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:28:50 crc kubenswrapper[5114]: I0216 00:28:50.084903 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:29:02 crc kubenswrapper[5114]: I0216 00:29:02.223700 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-phf92_13bfd2d1-3c0a-4fc6-a84b-45f3459195b0/prometheus-operator/0.log" Feb 16 00:29:02 crc kubenswrapper[5114]: I0216 00:29:02.243005 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6db558659d-gsmjz_eb532db4-78a1-465f-8c41-ba9de05d7349/prometheus-operator-admission-webhook/0.log" Feb 16 00:29:02 crc kubenswrapper[5114]: I0216 00:29:02.246753 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6db558659d-lrffd_3c3c704d-2d95-41a5-9189-83392c97240e/prometheus-operator-admission-webhook/0.log" Feb 16 00:29:02 crc kubenswrapper[5114]: I0216 00:29:02.383017 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-c8p8q_76206e1f-dcb7-4b06-9980-7bfb8c3c9b02/operator/0.log" Feb 16 00:29:02 crc kubenswrapper[5114]: I0216 00:29:02.390673 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-hqf4v_24f609a1-7bb0-432e-951d-c23dc581bc81/perses-operator/0.log" Feb 16 
00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.084643 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.085442 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.085547 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.086575 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6728bdb88e7106d5ac3aae01393284af609b0611c76a30dcea25efd3ae3bc66d"} pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.086664 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" containerID="cri-o://6728bdb88e7106d5ac3aae01393284af609b0611c76a30dcea25efd3ae3bc66d" gracePeriod=600 Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.218792 5114 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.769349 
5114 generic.go:358] "Generic (PLEG): container finished" podID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerID="6728bdb88e7106d5ac3aae01393284af609b0611c76a30dcea25efd3ae3bc66d" exitCode=0 Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.769705 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerDied","Data":"6728bdb88e7106d5ac3aae01393284af609b0611c76a30dcea25efd3ae3bc66d"} Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.769729 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"cd6c748eaadab06eb313c349f8074d0b9016ec8e36372d17291cd645c5a33d4b"} Feb 16 00:29:20 crc kubenswrapper[5114]: I0216 00:29:20.769743 5114 scope.go:117] "RemoveContainer" containerID="08e121677631f460690080580c06d5b5374b81d3fbafdd43ec22ad0e68333766" Feb 16 00:29:43 crc kubenswrapper[5114]: I0216 00:29:43.010829 5114 generic.go:358] "Generic (PLEG): container finished" podID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerID="05f751d5ae3295a5129cb310a0b37d6e6936c173e875031f3b1cbc61819e5945" exitCode=0 Feb 16 00:29:43 crc kubenswrapper[5114]: I0216 00:29:43.010953 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-dwkkv/must-gather-dwww7" event={"ID":"b0b7e870-7d23-4439-be1a-b364faf90d09","Type":"ContainerDied","Data":"05f751d5ae3295a5129cb310a0b37d6e6936c173e875031f3b1cbc61819e5945"} Feb 16 00:29:43 crc kubenswrapper[5114]: I0216 00:29:43.012592 5114 scope.go:117] "RemoveContainer" containerID="05f751d5ae3295a5129cb310a0b37d6e6936c173e875031f3b1cbc61819e5945" Feb 16 00:29:43 crc kubenswrapper[5114]: I0216 00:29:43.566601 5114 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-dwkkv_must-gather-dwww7_b0b7e870-7d23-4439-be1a-b364faf90d09/gather/0.log" Feb 16 00:29:49 crc kubenswrapper[5114]: I0216 00:29:49.846072 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-dwkkv/must-gather-dwww7"] Feb 16 00:29:49 crc kubenswrapper[5114]: I0216 00:29:49.846906 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-dwkkv/must-gather-dwww7"] Feb 16 00:29:49 crc kubenswrapper[5114]: I0216 00:29:49.847344 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-dwkkv/must-gather-dwww7" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerName="copy" containerID="cri-o://370c1eeeb45b89dce99245fdc72724c1caea0045a8a80c83576aa248ed79394f" gracePeriod=2 Feb 16 00:29:49 crc kubenswrapper[5114]: I0216 00:29:49.850054 5114 status_manager.go:895] "Failed to get status for pod" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" pod="openshift-must-gather-dwkkv/must-gather-dwww7" err="pods \"must-gather-dwww7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-dwkkv\": no relationship found between node 'crc' and this object" Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.091377 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwkkv_must-gather-dwww7_b0b7e870-7d23-4439-be1a-b364faf90d09/copy/0.log" Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.092611 5114 generic.go:358] "Generic (PLEG): container finished" podID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerID="370c1eeeb45b89dce99245fdc72724c1caea0045a8a80c83576aa248ed79394f" exitCode=143 Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.314918 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwkkv_must-gather-dwww7_b0b7e870-7d23-4439-be1a-b364faf90d09/copy/0.log" Feb 16 00:29:50 crc 
kubenswrapper[5114]: I0216 00:29:50.315526 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.316907 5114 status_manager.go:895] "Failed to get status for pod" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" pod="openshift-must-gather-dwkkv/must-gather-dwww7" err="pods \"must-gather-dwww7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-dwkkv\": no relationship found between node 'crc' and this object" Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.450711 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0b7e870-7d23-4439-be1a-b364faf90d09-must-gather-output\") pod \"b0b7e870-7d23-4439-be1a-b364faf90d09\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.450887 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzlt7\" (UniqueName: \"kubernetes.io/projected/b0b7e870-7d23-4439-be1a-b364faf90d09-kube-api-access-zzlt7\") pod \"b0b7e870-7d23-4439-be1a-b364faf90d09\" (UID: \"b0b7e870-7d23-4439-be1a-b364faf90d09\") " Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.457129 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0b7e870-7d23-4439-be1a-b364faf90d09-kube-api-access-zzlt7" (OuterVolumeSpecName: "kube-api-access-zzlt7") pod "b0b7e870-7d23-4439-be1a-b364faf90d09" (UID: "b0b7e870-7d23-4439-be1a-b364faf90d09"). InnerVolumeSpecName "kube-api-access-zzlt7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.526967 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0b7e870-7d23-4439-be1a-b364faf90d09-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b0b7e870-7d23-4439-be1a-b364faf90d09" (UID: "b0b7e870-7d23-4439-be1a-b364faf90d09"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.553521 5114 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b0b7e870-7d23-4439-be1a-b364faf90d09-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 16 00:29:50 crc kubenswrapper[5114]: I0216 00:29:50.553565 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zzlt7\" (UniqueName: \"kubernetes.io/projected/b0b7e870-7d23-4439-be1a-b364faf90d09-kube-api-access-zzlt7\") on node \"crc\" DevicePath \"\"" Feb 16 00:29:51 crc kubenswrapper[5114]: I0216 00:29:51.103382 5114 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-dwkkv_must-gather-dwww7_b0b7e870-7d23-4439-be1a-b364faf90d09/copy/0.log" Feb 16 00:29:51 crc kubenswrapper[5114]: I0216 00:29:51.104335 5114 scope.go:117] "RemoveContainer" containerID="370c1eeeb45b89dce99245fdc72724c1caea0045a8a80c83576aa248ed79394f" Feb 16 00:29:51 crc kubenswrapper[5114]: I0216 00:29:51.104420 5114 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-dwkkv/must-gather-dwww7" Feb 16 00:29:51 crc kubenswrapper[5114]: I0216 00:29:51.107960 5114 status_manager.go:895] "Failed to get status for pod" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" pod="openshift-must-gather-dwkkv/must-gather-dwww7" err="pods \"must-gather-dwww7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-dwkkv\": no relationship found between node 'crc' and this object" Feb 16 00:29:51 crc kubenswrapper[5114]: I0216 00:29:51.127115 5114 scope.go:117] "RemoveContainer" containerID="05f751d5ae3295a5129cb310a0b37d6e6936c173e875031f3b1cbc61819e5945" Feb 16 00:29:51 crc kubenswrapper[5114]: I0216 00:29:51.139471 5114 status_manager.go:895] "Failed to get status for pod" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" pod="openshift-must-gather-dwkkv/must-gather-dwww7" err="pods \"must-gather-dwww7\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-dwkkv\": no relationship found between node 'crc' and this object" Feb 16 00:29:51 crc kubenswrapper[5114]: I0216 00:29:51.831293 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" path="/var/lib/kubelet/pods/b0b7e870-7d23-4439-be1a-b364faf90d09/volumes" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.161791 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd"] Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.163286 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerName="gather" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.163343 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerName="gather" Feb 16 00:30:00 
crc kubenswrapper[5114]: I0216 00:30:00.163508 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="52099491-2311-4415-9b46-dd113bef3357" containerName="oc" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.163591 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="52099491-2311-4415-9b46-dd113bef3357" containerName="oc" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.163612 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerName="copy" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.163621 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerName="copy" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.165428 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerName="gather" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.165459 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="52099491-2311-4415-9b46-dd113bef3357" containerName="oc" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.165471 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="b0b7e870-7d23-4439-be1a-b364faf90d09" containerName="copy" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.170221 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.174408 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.174676 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.180179 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd"] Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.219182 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-secret-volume\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.219347 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-config-volume\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.219421 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmz4k\" (UniqueName: \"kubernetes.io/projected/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-kube-api-access-qmz4k\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.264615 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29520030-djxsr"] Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.269142 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520030-djxsr"] Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.269371 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520030-djxsr" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.271531 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.271717 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.271996 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\"" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.321833 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rqg\" (UniqueName: \"kubernetes.io/projected/90244195-760b-4567-b121-3e41dba3b310-kube-api-access-d2rqg\") pod \"auto-csr-approver-29520030-djxsr\" (UID: \"90244195-760b-4567-b121-3e41dba3b310\") " pod="openshift-infra/auto-csr-approver-29520030-djxsr" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.321900 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-secret-volume\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.322072 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-config-volume\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.322173 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qmz4k\" (UniqueName: \"kubernetes.io/projected/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-kube-api-access-qmz4k\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.323459 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-config-volume\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.335962 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-secret-volume\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.339962 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmz4k\" (UniqueName: 
\"kubernetes.io/projected/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-kube-api-access-qmz4k\") pod \"collect-profiles-29520030-7kskd\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.423831 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d2rqg\" (UniqueName: \"kubernetes.io/projected/90244195-760b-4567-b121-3e41dba3b310-kube-api-access-d2rqg\") pod \"auto-csr-approver-29520030-djxsr\" (UID: \"90244195-760b-4567-b121-3e41dba3b310\") " pod="openshift-infra/auto-csr-approver-29520030-djxsr" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.443756 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2rqg\" (UniqueName: \"kubernetes.io/projected/90244195-760b-4567-b121-3e41dba3b310-kube-api-access-d2rqg\") pod \"auto-csr-approver-29520030-djxsr\" (UID: \"90244195-760b-4567-b121-3e41dba3b310\") " pod="openshift-infra/auto-csr-approver-29520030-djxsr" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.537110 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.585618 5114 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29520030-djxsr"
Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.782124 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd"]
Feb 16 00:30:00 crc kubenswrapper[5114]: I0216 00:30:00.821958 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520030-djxsr"]
Feb 16 00:30:00 crc kubenswrapper[5114]: W0216 00:30:00.827706 5114 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90244195_760b_4567_b121_3e41dba3b310.slice/crio-f1d1ecae8bffee591343915c80ea8c16b02fb4719118606fd7f87ce81cc4ab4c WatchSource:0}: Error finding container f1d1ecae8bffee591343915c80ea8c16b02fb4719118606fd7f87ce81cc4ab4c: Status 404 returned error can't find the container with id f1d1ecae8bffee591343915c80ea8c16b02fb4719118606fd7f87ce81cc4ab4c
Feb 16 00:30:01 crc kubenswrapper[5114]: I0216 00:30:01.220367 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520030-djxsr" event={"ID":"90244195-760b-4567-b121-3e41dba3b310","Type":"ContainerStarted","Data":"f1d1ecae8bffee591343915c80ea8c16b02fb4719118606fd7f87ce81cc4ab4c"}
Feb 16 00:30:01 crc kubenswrapper[5114]: I0216 00:30:01.222411 5114 generic.go:358] "Generic (PLEG): container finished" podID="1bd366ad-b999-45e8-9cfd-8a3c1069d6a9" containerID="d391fe3da3f96d6eff5f933c49f3501f17a11ca7718c0641cf552415631fc2c3" exitCode=0
Feb 16 00:30:01 crc kubenswrapper[5114]: I0216 00:30:01.222558 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" event={"ID":"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9","Type":"ContainerDied","Data":"d391fe3da3f96d6eff5f933c49f3501f17a11ca7718c0641cf552415631fc2c3"}
Feb 16 00:30:01 crc kubenswrapper[5114]: I0216 00:30:01.222621 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" event={"ID":"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9","Type":"ContainerStarted","Data":"698c4390794b180eaaeb2e7ec0861f802fa99d89aac15dc0958ef6b095effabc"}
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.629686 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd"
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.677365 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-config-volume\") pod \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") "
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.677472 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmz4k\" (UniqueName: \"kubernetes.io/projected/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-kube-api-access-qmz4k\") pod \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") "
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.677681 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-secret-volume\") pod \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\" (UID: \"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9\") "
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.680652 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-config-volume" (OuterVolumeSpecName: "config-volume") pod "1bd366ad-b999-45e8-9cfd-8a3c1069d6a9" (UID: "1bd366ad-b999-45e8-9cfd-8a3c1069d6a9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.683938 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-kube-api-access-qmz4k" (OuterVolumeSpecName: "kube-api-access-qmz4k") pod "1bd366ad-b999-45e8-9cfd-8a3c1069d6a9" (UID: "1bd366ad-b999-45e8-9cfd-8a3c1069d6a9"). InnerVolumeSpecName "kube-api-access-qmz4k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.687863 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1bd366ad-b999-45e8-9cfd-8a3c1069d6a9" (UID: "1bd366ad-b999-45e8-9cfd-8a3c1069d6a9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.779693 5114 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.779732 5114 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-config-volume\") on node \"crc\" DevicePath \"\""
Feb 16 00:30:02 crc kubenswrapper[5114]: I0216 00:30:02.779751 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qmz4k\" (UniqueName: \"kubernetes.io/projected/1bd366ad-b999-45e8-9cfd-8a3c1069d6a9-kube-api-access-qmz4k\") on node \"crc\" DevicePath \"\""
Feb 16 00:30:03 crc kubenswrapper[5114]: I0216 00:30:03.246692 5114 generic.go:358] "Generic (PLEG): container finished" podID="90244195-760b-4567-b121-3e41dba3b310" containerID="6dad26f5fb9797f97425997b06961ee2ec7deda08cfd6ab43c04ef1b532e8ad8" exitCode=0
Feb 16 00:30:03 crc kubenswrapper[5114]: I0216 00:30:03.247035 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520030-djxsr" event={"ID":"90244195-760b-4567-b121-3e41dba3b310","Type":"ContainerDied","Data":"6dad26f5fb9797f97425997b06961ee2ec7deda08cfd6ab43c04ef1b532e8ad8"}
Feb 16 00:30:03 crc kubenswrapper[5114]: I0216 00:30:03.250463 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd" event={"ID":"1bd366ad-b999-45e8-9cfd-8a3c1069d6a9","Type":"ContainerDied","Data":"698c4390794b180eaaeb2e7ec0861f802fa99d89aac15dc0958ef6b095effabc"}
Feb 16 00:30:03 crc kubenswrapper[5114]: I0216 00:30:03.250521 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="698c4390794b180eaaeb2e7ec0861f802fa99d89aac15dc0958ef6b095effabc"
Feb 16 00:30:03 crc kubenswrapper[5114]: I0216 00:30:03.250631 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29520030-7kskd"
Feb 16 00:30:04 crc kubenswrapper[5114]: I0216 00:30:04.626366 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520030-djxsr"
Feb 16 00:30:04 crc kubenswrapper[5114]: I0216 00:30:04.711857 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rqg\" (UniqueName: \"kubernetes.io/projected/90244195-760b-4567-b121-3e41dba3b310-kube-api-access-d2rqg\") pod \"90244195-760b-4567-b121-3e41dba3b310\" (UID: \"90244195-760b-4567-b121-3e41dba3b310\") "
Feb 16 00:30:04 crc kubenswrapper[5114]: I0216 00:30:04.717493 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90244195-760b-4567-b121-3e41dba3b310-kube-api-access-d2rqg" (OuterVolumeSpecName: "kube-api-access-d2rqg") pod "90244195-760b-4567-b121-3e41dba3b310" (UID: "90244195-760b-4567-b121-3e41dba3b310"). InnerVolumeSpecName "kube-api-access-d2rqg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:30:04 crc kubenswrapper[5114]: I0216 00:30:04.816009 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d2rqg\" (UniqueName: \"kubernetes.io/projected/90244195-760b-4567-b121-3e41dba3b310-kube-api-access-d2rqg\") on node \"crc\" DevicePath \"\""
Feb 16 00:30:05 crc kubenswrapper[5114]: I0216 00:30:05.304323 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520030-djxsr" event={"ID":"90244195-760b-4567-b121-3e41dba3b310","Type":"ContainerDied","Data":"f1d1ecae8bffee591343915c80ea8c16b02fb4719118606fd7f87ce81cc4ab4c"}
Feb 16 00:30:05 crc kubenswrapper[5114]: I0216 00:30:05.304731 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d1ecae8bffee591343915c80ea8c16b02fb4719118606fd7f87ce81cc4ab4c"
Feb 16 00:30:05 crc kubenswrapper[5114]: I0216 00:30:05.304892 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520030-djxsr"
Feb 16 00:30:05 crc kubenswrapper[5114]: I0216 00:30:05.699355 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29520024-qzwr4"]
Feb 16 00:30:05 crc kubenswrapper[5114]: I0216 00:30:05.705477 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29520024-qzwr4"]
Feb 16 00:30:05 crc kubenswrapper[5114]: I0216 00:30:05.828016 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ded7593-ae73-4c96-ad73-bfe65049750b" path="/var/lib/kubelet/pods/2ded7593-ae73-4c96-ad73-bfe65049750b/volumes"
Feb 16 00:30:49 crc kubenswrapper[5114]: I0216 00:30:49.763680 5114 scope.go:117] "RemoveContainer" containerID="0c392a92da7f8d0ea384a50a29794605f90d387390a5533f0b687b0b30e19671"
Feb 16 00:31:20 crc kubenswrapper[5114]: I0216 00:31:20.085499 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:31:20 crc kubenswrapper[5114]: I0216 00:31:20.086148 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:31:50 crc kubenswrapper[5114]: I0216 00:31:50.085142 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:31:50 crc kubenswrapper[5114]: I0216 00:31:50.085966 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.152029 5114 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29520032-426mj"]
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.153106 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1bd366ad-b999-45e8-9cfd-8a3c1069d6a9" containerName="collect-profiles"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.153125 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd366ad-b999-45e8-9cfd-8a3c1069d6a9" containerName="collect-profiles"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.153138 5114 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="90244195-760b-4567-b121-3e41dba3b310" containerName="oc"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.153146 5114 state_mem.go:107] "Deleted CPUSet assignment" podUID="90244195-760b-4567-b121-3e41dba3b310" containerName="oc"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.153402 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="1bd366ad-b999-45e8-9cfd-8a3c1069d6a9" containerName="collect-profiles"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.153433 5114 memory_manager.go:356] "RemoveStaleState removing state" podUID="90244195-760b-4567-b121-3e41dba3b310" containerName="oc"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.176428 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520032-426mj"]
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.176570 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520032-426mj"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.179765 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\""
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.181436 5114 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-zrknt\""
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.181721 5114 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\""
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.266791 5114 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpdh5\" (UniqueName: \"kubernetes.io/projected/489acec7-42e6-41a1-bfd5-4c319c50f384-kube-api-access-wpdh5\") pod \"auto-csr-approver-29520032-426mj\" (UID: \"489acec7-42e6-41a1-bfd5-4c319c50f384\") " pod="openshift-infra/auto-csr-approver-29520032-426mj"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.369042 5114 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wpdh5\" (UniqueName: \"kubernetes.io/projected/489acec7-42e6-41a1-bfd5-4c319c50f384-kube-api-access-wpdh5\") pod \"auto-csr-approver-29520032-426mj\" (UID: \"489acec7-42e6-41a1-bfd5-4c319c50f384\") " pod="openshift-infra/auto-csr-approver-29520032-426mj"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.412830 5114 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpdh5\" (UniqueName: \"kubernetes.io/projected/489acec7-42e6-41a1-bfd5-4c319c50f384-kube-api-access-wpdh5\") pod \"auto-csr-approver-29520032-426mj\" (UID: \"489acec7-42e6-41a1-bfd5-4c319c50f384\") " pod="openshift-infra/auto-csr-approver-29520032-426mj"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.516865 5114 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520032-426mj"
Feb 16 00:32:00 crc kubenswrapper[5114]: I0216 00:32:00.985405 5114 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29520032-426mj"]
Feb 16 00:32:01 crc kubenswrapper[5114]: I0216 00:32:01.434874 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520032-426mj" event={"ID":"489acec7-42e6-41a1-bfd5-4c319c50f384","Type":"ContainerStarted","Data":"7313bfbba3261d55bc44b63cbbdc8865dda5fe86980759ea3c8b75218ca26a19"}
Feb 16 00:32:02 crc kubenswrapper[5114]: I0216 00:32:02.443370 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520032-426mj" event={"ID":"489acec7-42e6-41a1-bfd5-4c319c50f384","Type":"ContainerStarted","Data":"bc8ca270f6fa9f8cb08a1379a7fd5f3224a36ccd5951952938214a99e342e985"}
Feb 16 00:32:02 crc kubenswrapper[5114]: I0216 00:32:02.471507 5114 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29520032-426mj" podStartSLOduration=1.38802413 podStartE2EDuration="2.471487351s" podCreationTimestamp="2026-02-16 00:32:00 +0000 UTC" firstStartedPulling="2026-02-16 00:32:00.988146664 +0000 UTC m=+1397.369423522" lastFinishedPulling="2026-02-16 00:32:02.071609885 +0000 UTC m=+1398.452886743" observedRunningTime="2026-02-16 00:32:02.464324209 +0000 UTC m=+1398.845601057" watchObservedRunningTime="2026-02-16 00:32:02.471487351 +0000 UTC m=+1398.852764179"
Feb 16 00:32:03 crc kubenswrapper[5114]: I0216 00:32:03.454932 5114 generic.go:358] "Generic (PLEG): container finished" podID="489acec7-42e6-41a1-bfd5-4c319c50f384" containerID="bc8ca270f6fa9f8cb08a1379a7fd5f3224a36ccd5951952938214a99e342e985" exitCode=0
Feb 16 00:32:03 crc kubenswrapper[5114]: I0216 00:32:03.455004 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520032-426mj" event={"ID":"489acec7-42e6-41a1-bfd5-4c319c50f384","Type":"ContainerDied","Data":"bc8ca270f6fa9f8cb08a1379a7fd5f3224a36ccd5951952938214a99e342e985"}
Feb 16 00:32:04 crc kubenswrapper[5114]: I0216 00:32:04.812259 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520032-426mj"
Feb 16 00:32:04 crc kubenswrapper[5114]: I0216 00:32:04.890389 5114 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29520026-lmzkb"]
Feb 16 00:32:04 crc kubenswrapper[5114]: I0216 00:32:04.894691 5114 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29520026-lmzkb"]
Feb 16 00:32:04 crc kubenswrapper[5114]: I0216 00:32:04.950524 5114 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpdh5\" (UniqueName: \"kubernetes.io/projected/489acec7-42e6-41a1-bfd5-4c319c50f384-kube-api-access-wpdh5\") pod \"489acec7-42e6-41a1-bfd5-4c319c50f384\" (UID: \"489acec7-42e6-41a1-bfd5-4c319c50f384\") "
Feb 16 00:32:04 crc kubenswrapper[5114]: I0216 00:32:04.970532 5114 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/489acec7-42e6-41a1-bfd5-4c319c50f384-kube-api-access-wpdh5" (OuterVolumeSpecName: "kube-api-access-wpdh5") pod "489acec7-42e6-41a1-bfd5-4c319c50f384" (UID: "489acec7-42e6-41a1-bfd5-4c319c50f384"). InnerVolumeSpecName "kube-api-access-wpdh5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Feb 16 00:32:05 crc kubenswrapper[5114]: I0216 00:32:05.053106 5114 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wpdh5\" (UniqueName: \"kubernetes.io/projected/489acec7-42e6-41a1-bfd5-4c319c50f384-kube-api-access-wpdh5\") on node \"crc\" DevicePath \"\""
Feb 16 00:32:05 crc kubenswrapper[5114]: I0216 00:32:05.473723 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29520032-426mj" event={"ID":"489acec7-42e6-41a1-bfd5-4c319c50f384","Type":"ContainerDied","Data":"7313bfbba3261d55bc44b63cbbdc8865dda5fe86980759ea3c8b75218ca26a19"}
Feb 16 00:32:05 crc kubenswrapper[5114]: I0216 00:32:05.473762 5114 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7313bfbba3261d55bc44b63cbbdc8865dda5fe86980759ea3c8b75218ca26a19"
Feb 16 00:32:05 crc kubenswrapper[5114]: I0216 00:32:05.473823 5114 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29520032-426mj"
Feb 16 00:32:05 crc kubenswrapper[5114]: I0216 00:32:05.831566 5114 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="172de72d-25e0-43a0-b18d-2e8c7e548a80" path="/var/lib/kubelet/pods/172de72d-25e0-43a0-b18d-2e8c7e548a80/volumes"
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.085142 5114 patch_prober.go:28] interesting pod/machine-config-daemon-vp5kn container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.085791 5114 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.085839 5114 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn"
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.086471 5114 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cd6c748eaadab06eb313c349f8074d0b9016ec8e36372d17291cd645c5a33d4b"} pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.086522 5114 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" podUID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerName="machine-config-daemon" containerID="cri-o://cd6c748eaadab06eb313c349f8074d0b9016ec8e36372d17291cd645c5a33d4b" gracePeriod=600
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.634838 5114 generic.go:358] "Generic (PLEG): container finished" podID="b6929dc4-3c97-49e3-b4c6-cc35d5e7b917" containerID="cd6c748eaadab06eb313c349f8074d0b9016ec8e36372d17291cd645c5a33d4b" exitCode=0
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.634915 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn" event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerDied","Data":"cd6c748eaadab06eb313c349f8074d0b9016ec8e36372d17291cd645c5a33d4b"}
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.635621 5114 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-vp5kn"
event={"ID":"b6929dc4-3c97-49e3-b4c6-cc35d5e7b917","Type":"ContainerStarted","Data":"d774260cd280da4d314d4664a08144c82dfb33885d811a35b102ce0ac3f793ef"}
Feb 16 00:32:20 crc kubenswrapper[5114]: I0216 00:32:20.635643 5114 scope.go:117] "RemoveContainer" containerID="6728bdb88e7106d5ac3aae01393284af609b0611c76a30dcea25efd3ae3bc66d"
Feb 16 00:32:49 crc kubenswrapper[5114]: I0216 00:32:49.922553 5114 scope.go:117] "RemoveContainer" containerID="4805d2bf685e7178614a71b00f0919700c7fb9fd40dd4169f474ee457c41b31a"